Compare commits


364 Commits
1.2.2 ... 1.4.0

SHA1 Message Date
0ecf31d896 modify: User space memory access (arm64) 2017-10-24 10:29:11 +09:00
08a625cc0d modify: User space memory access
perf_event_open,futex,process_vm_readv,process_vm_writev,move_pages
2017-10-23 20:27:56 +09:00
12840601e1 support PERF_TYPE_{HARDWARE|HW_CACHE} in perf_event_open
refs #829
2017-10-20 23:10:20 +09:00
2ae6883a8b mcreboot.sh, mcstop+release.sh: Fix retry loop of shutdown 2017-10-19 01:54:46 +09:00
d5629606c5 mcexec: -m: interpret as numactl -m (i.e., MPOL_BIND)
Conflicts:
	executer/include/uprotocol.h
	executer/user/mcexec.c
	kernel/include/syscall.h
2017-10-18 16:54:34 +09:00
285059e504 mcexec: use -M for --mpol-threshold
Conflicts:
	executer/user/mcexec.c
2017-10-18 16:44:49 +09:00
5b6d0a887c Add ARM64 arch_rusage header 2017-10-18 09:23:08 +09:00
3573b8649e Guard call to gencore and freecore
The gencore() and freecore() code in gencore.c is guarded by
POSTK_DEBUG_ARCH_DEP_18, so the calls to these functions should
also be guarded, otherwise linking fails.
2017-10-18 09:20:52 +09:00
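For illustration, a minimal sketch of the guard pattern this commit describes; the function names are real, but the zero-argument prototypes here are hypothetical stand-ins for the actual signatures in gencore.c:

#ifdef POSTK_DEBUG_ARCH_DEP_18
int gencore(void);	/* hypothetical prototypes; the real ones */
void freecore(void);	/* take thread/coretable arguments */
#endif

static void dump_core_guarded(void)
{
#ifdef POSTK_DEBUG_ARCH_DEP_18
	/* Reference the symbols only when gencore.c defines them,
	 * under the same macro, so the link cannot fail. */
	if (gencore() == 0)
		freecore();
#endif
}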
d7523cdd84 Remove assignment of ns_per_tsc in struct monitor
The struct member seems to have been removed, or moved to struct
global_rusage.
2017-10-18 09:20:52 +09:00
5753db5846 Add ihk_mc_syscall_number() for ARM by reading x8 2017-10-18 09:20:52 +09:00
2d7cb0af89 Add copy_fp_regs to ARM (same as for x86_64) 2017-10-18 09:20:52 +09:00
1cb9b435a9 Fix (?) build system
- disable -mno-red-zone for ARM
- add missing INCLUDEDIR
- make gencore.c compile
2017-10-18 09:20:52 +09:00
43ecf06e83 arch: x86 -> x86_64 and build system changes 2017-10-18 09:20:52 +09:00
51982de36b Handle return value of mcctrl_ikc_send in mcexec_handle_prepare_image 2017-10-18 09:20:51 +09:00
0a22320a3c Don't allocate memory for 0-page-sized requests
Previously the allocator would return all available memory for a
request of 0 pages. This is rather counter-intuitive and left no
memory for subsequent allocations.
2017-10-18 09:20:51 +09:00
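A minimal sketch of the guard this implies, with a hypothetical allocator name (the real fix lives in the page allocator's bitmap scan):

#include <stddef.h>

/* A request for 0 pages used to match "everything free" in the
 * bitmap scan; fail it explicitly instead. */
static void *bitmap_alloc_pages(size_t npages)
{
	if (npages == 0)
		return NULL;
	/* ... scan the bitmap for npages consecutive free pages ... */
	return NULL; /* placeholder for the normal path */
}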
8813e890c5 Fix the check routine for elf sections 2017-10-18 09:20:51 +09:00
e664ffba18 Show context registers at the interrupt by SGI 6 2017-10-18 09:20:51 +09:00
3bd0137c25 Fix some race conditions on arm64
* move barrier() to architecture-dependent region
* add barrier() in issue_ipi, kprintf, map_virtual
* enable the workaround for Cavium ThunderX
2017-10-18 09:20:51 +09:00
4f2b4aa402 Round the allocation for cpu-local variables up to PAGE_SIZE
Previously, this resulted in 0 pages being allocated.
2017-10-18 09:20:51 +09:00
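The usual round-up idiom involved (PAGE_ALIGN_UP is illustrative, not McKernel's macro; see also 931448a94d below, which fixes a typo in the repository's own page_align_up):

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_ALIGN_UP(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	/* Truncating division would turn a sub-page allocation into
	 * 0 pages; rounding up always yields at least one page. */
	printf("%lu\n", PAGE_ALIGN_UP(1UL));    /* 4096 */
	printf("%lu\n", PAGE_ALIGN_UP(4096UL)); /* 4096 */
	printf("%lu\n", PAGE_ALIGN_UP(4097UL)); /* 8192 */
	return 0;
}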
682cd34b74 Make mcstop+release architecture independent 2017-10-18 09:20:51 +09:00
2bc4d06a48 Add empty definition of visit_pte_range_safe()
This is for linking only. visit_pte_range_safe() is required only
for memdump, as far as I can tell. Since memdump is disabled anyway,
I think it's OK to leave this function empty for now.
2017-10-18 09:20:51 +09:00
4f2c1e07c1 Add ARCH variable to Makefiles
In some Makefiles the ARCH variable was not set, although it was used.
In executer/user/Makefile.in it was used before it was set.
2017-10-18 09:20:50 +09:00
77bb3038d3 Add PT_ENTRIES macro 2017-10-18 09:20:50 +09:00
931448a94d Fix typo in page_align_up 2017-10-18 09:20:50 +09:00
c51bbbabc6 Change x86 to @ARCH@ in mcreboot-smp-x86.sh.in
since it is used for smp-x86 and smp-arm64
2017-10-18 09:20:50 +09:00
2ddc52e1a4 setitimer(): Fix error handling of copy_from_user()
This fixes POSTK_TEMP_FIX_40 (POSTK_DEBUG_TEMP_FIX_40)
2017-10-13 04:59:50 +09:00
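The standard pattern this restores, as a sketch (the stand-in type and helper name are hypothetical): copy_from_user() returns the number of bytes left uncopied, so any non-zero result maps to -EFAULT rather than being passed through.

#include <errno.h>

extern unsigned long copy_from_user(void *dst, const void *src,
				    unsigned long n);

struct itimerval_k { long sec; long usec; }; /* stand-in type */

static long do_setitimer(const void *uval)
{
	struct itimerval_k v;

	if (copy_from_user(&v, uval, sizeof(v)))
		return -EFAULT; /* not the raw non-zero byte count */
	/* ... program the timer ... */
	return 0;
}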
3c93958c48 extend_process_region(): fix align_shift (POSTK_DEBUG_TEMP_FIX_68) 2017-10-17 15:07:57 +09:00
9763c40f64 set_robust_list: returns 0
refs #977
2017-10-16 09:54:23 +09:00
3bf77446cc mcreboot-smp-x86.sh: add extra_kopts param
This lets one specify arbitrary kernel parameters instead of manually
fiddling with the script.
It could ultimately replace params like -t (turbo) and -d (dump_level) that
do not have any side effect (logmode starts a userland daemon)
2017-10-13 10:02:11 +09:00
c3dfb1663d page_fault_handler: do not try to fault addresses < 4k
There is no good reason to map these low addresses (userspace could with
mmap MAP_FIXED, but that is grounds for many exploits...);

the main advantage, however, is that if we do a NULL dereference or
something close to it (0->foo) within a page fault, we get a panic stack
instead of a hang because we cannot take some locks.
2017-10-13 10:02:11 +09:00
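A sketch of the early-out this adds (names illustrative, not the real handler signature):

#define PAGE_SIZE 4096UL

static int page_fault_handler(unsigned long fault_addr)
{
	/* Never demand-map the first page: a NULL dereference (or a
	 * near-NULL one like 0->foo) taken while already handling a
	 * page fault then panics with a stack trace instead of
	 * hanging on locks it cannot retake. */
	if (fault_addr < PAGE_SIZE)
		return -1; /* error path: SIGSEGV or kernel panic */
	/* ... normal fault handling ... */
	return 0;
}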
217dd9c1e5 x86 set_signal: panic if interrupt came from kernel
This makes debugging errors (e.g. an FPE from the kernel) much easier;
we really shouldn't be taking a user-level coredump blaming the user
in that case anyway
2017-10-13 10:02:11 +09:00
d4cd756a91 x86/cpu.c: unhandled page fault: print pre-fault stack
Do basic manual unwinding and print raw stack addresses, with a
suggested invocation of addr2line to pretty-print the result.
2017-10-13 10:02:11 +09:00
b894619d1b Speed up parallel builds
 - make should be $(MAKE)
 - add + in front of rules spawning a long-lasting make process in a
subshell. (This would not be needed with $(MAKE) -C .. target, but our
makefiles do not handle that because they use $(PWD))
 - split the main 'all' rule, as all 4 targets are independent
 - fix dependencies where appropriate for parallelism

Extra, not speed-related changes:
 - remove some double-colon for targets as they do not need it

This cuts build time from 5s to 1.5s on a laptop with -j4, and more
importantly from 85s to 35s on a KNL node.
As a bonus, the fixed dependencies remove the need to clean before
rebuilding all the time. Probably.
2017-10-13 10:02:11 +09:00
b962da700b do_signal: ignore SIGWINCH
McKernel would terminate() the running program on terminal resize.
It actually looks like there is nothing for us to do when we
get that signal anyway (tested with `dialog`)
2017-10-13 10:02:11 +09:00
196379854b Fix a few more harmless compiler warnings:
- myfree in pager.c was called with an argument, so add one to the
dummy definition
- pgoff is offset_t (unsigned) and doesn't need to be compared to 0
- clang says '*(int *)0 = 0' will be optimized away instead of keeping
the segfault without a volatile hint (?! that is wrong!), but it causes
no harm to add it anyway.
2017-10-13 10:02:11 +09:00
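The volatile hint in question, as a self-contained fragment:

/* Without volatile, a compiler may treat the store through address 0
 * as undefined behavior and delete it; volatile keeps the deliberate
 * faulting store in the generated code. */
static void force_segfault(void)
{
	*(volatile int *)0 = 0;
}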
d213efac79 mcctrl/sysfs: add parenthesis around SYSFS_UNLINK_KEEP_ANCESTOR check
! has higher precedence than &, so !flags & SYSFS_UNLINK_KEEP_ANCESTOR is
almost never what was meant. Change to !(flags & SYSFS_UNLINK_KEEP_ANCESTOR)
2017-10-13 10:02:11 +09:00
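The precedence trap, spelled out (a sketch; the flag's real value comes from mcctrl's sysfs header):

#define SYSFS_UNLINK_KEEP_ANCESTOR 0x1 /* illustrative value */

static int keep_ancestor_buggy(int flags)
{
	/* Parses as (!flags) & 0x1: zero for every non-zero flags. */
	return !flags & SYSFS_UNLINK_KEEP_ANCESTOR;
}

static int keep_ancestor_fixed(int flags)
{
	return !(flags & SYSFS_UNLINK_KEEP_ANCESTOR);
}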
38910fe13d mc_perf_event.h: s/EVNET/EVENT/ in the guard (improper ifndef) 2017-10-13 10:02:11 +09:00
4d4279121b process/vm: replace vm_range list by an rbtree
This replaces the chained list used to keep track of all memory ranges
of a process by a standard rbtree (no need for an interval tree here
because there is no overlap)

Accesses that were done directly through vm_range_list before were
replaced by lookup_process_memory_range, even full list scan (e.g.
coredump).
The full scans will thus be less efficient because calls to rb_next()
will not be inlined, but these are rarer calls that can probably afford
this compared to code simplicity.

The only reference to the actual backing structure left outside of
process.c is a call to rb_erase in xpmem_free_process_memory_range.

v2: fix lookup_process_memory_range with small start address

v3: make vm_range_insert error out properly

Panic does not lead to easy debugging; all error paths
are handled to just return something on error

v4: fix lookup_process_memory_range (again)

The optimistic descent to the left was a more serious bug than just
the last iteration: we could pass by a match and continue down
the tree if the match was not a leaf.

v5: some users actually needed leftmost match, so restore behavior
without the breakage (hopefully)
2017-10-13 10:00:27 +09:00
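A sketch of the leftmost-match lookup that v4/v5 converge on, assuming the ported Linux rbtree API ('Port Linux red-black trees' below) and a simplified vm_range layout:

/* rb_node/rb_root/rb_entry come from the ported Linux rbtree header. */
struct vm_range {
	struct rb_node node;		/* keyed by start */
	unsigned long start, end;	/* [start, end), no overlaps */
};

/* Return the leftmost range with end > addr. Do not stop at the
 * first hit: a matching node may still have a smaller match in its
 * left subtree, which is exactly the v4 bug described above. */
static struct vm_range *lookup_range(struct rb_root *root,
				     unsigned long addr)
{
	struct rb_node *n = root->rb_node;
	struct vm_range *match = NULL;

	while (n) {
		struct vm_range *r = rb_entry(n, struct vm_range, node);

		if (addr < r->end) {
			match = r;	/* candidate; keep going left */
			n = n->rb_left;
		} else {
			n = n->rb_right;
		}
	}
	return match;
}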
99da5b6484 ptrace: unify flags PT_TRACE_SYSCALL_ENTER and PT_TRACE_SYSCALL_EXIT to PT_TRACE_SYSCALL
refs #961
2017-10-11 15:43:57 +09:00
6b60dee890 ihklib: Fix ihklib_rusage.h for x86 2017-10-04 05:06:17 +09:00
dd08a3151e mcreboot: Fix version check for mcoverlayfs 2017-10-04 00:37:01 +09:00
e1442bf12b mcexec: Fix usage 2017-10-03 15:34:00 +09:00
86f297ddc4 mcreboot: Fix umask change for /proc and /sys files 2017-10-03 15:21:44 +09:00
823b222af9 mcreboot: Change umask for /proc and /sys files 2017-10-03 06:03:44 +09:00
9c25eb8ef2 mcoverlayfs: Fix version check 2017-10-02 19:51:30 +09:00
665eead78b do_wait: delegate process status for ppid_parent if child process is a tracee
refs #946
2017-09-29 14:59:34 +09:00
f8ef43c77d Merge branch 'development' of pccluster.org:mckernel into development 2017-09-29 14:59:10 +09:00
8f4afe410f Remove obsolete pc_init(), pc_ap_init(), pc_test() 2017-09-29 13:20:01 +09:00
da9bb421cc ptrace: call ptrace_syscall_exit before check_signal
refs #960
2017-09-29 10:03:44 +09:00
1e89796d3e Replace ihk_set_kmsg() with ihk_get_kmsg_buf() 2017-09-27 20:26:23 +09:00
a1a2900606 ptrace: Fix the timing of save_fp_regs, and Add copy fp_regs to child in clone_thread
refs #702
2017-09-27 17:02:30 +09:00
79b977ac06 Check xgetbv availability before use for machines without it (i.e. KVM) 2017-09-26 19:31:34 +09:00
37e3118df6 mcexec: Add --stack-premap=<premap_size>[,<max>] to man page 2017-09-26 18:45:52 +09:00
be4d84c0c1 mcexec: Add --stack-premap=<premap_size>[,<max>]
<premap_size> of the stack is pre-mapped when creating a process,
and its maximum size is set to <max>.
This replaces MCKERNEL_RLIMIT_STACK=<premap_size>,<max>.
2017-09-26 17:04:10 +09:00
c43c1b640a execve: call ptrace_syscall_exit if execve succeeded
refs #945
2017-09-26 14:31:07 +09:00
e294db7e53 syscall: set syscall_return before calling ptrace_syscall_exit
refs #944
2017-09-26 14:29:02 +09:00
df3f388e09 syscall: set -ENOSYS to syscall_return before calling ptrace_syscall_enter
refs #943
2017-09-26 14:25:49 +09:00
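Taken together, the three ptrace commits above establish an ordering contract around syscall dispatch; a hedged sketch with hypothetical names:

#include <errno.h>

struct thread { long syscall_return; };

extern void ptrace_syscall_enter(struct thread *t);
extern void ptrace_syscall_exit(struct thread *t);
extern long do_syscall_body(struct thread *t, int num);

/* The tracer sees -ENOSYS at the entry stop, and the real return
 * value is already in place before the exit stop is reported. */
static long dispatch_syscall(struct thread *t, int num)
{
	long ret;

	t->syscall_return = -ENOSYS;	/* visible at the entry stop */
	ptrace_syscall_enter(t);

	ret = do_syscall_body(t, num);

	t->syscall_return = ret;	/* set before the exit stop */
	ptrace_syscall_exit(t);
	return ret;
}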
a2fbe99b60 madvise: support MADV_DONTDUMP/DODUMP
refs #661
2017-09-26 14:21:40 +09:00
9c847c0a8f Change permission of mcoverlay-create/destroy.sh from 600 to 755 2017-09-26 14:05:54 +09:00
58c1fd4512 Update test programs for qlmpi (do swap with using shared memory, ib_pingpong) 2017-09-25 16:56:52 +09:00
dae9a5ff13 mcexec: verify argument for -n/-t/-c 2017-09-25 16:43:47 +09:00
4d9a1628f2 Add test programs for ihk_os_getrusage() 2017-09-20 19:48:32 +09:00
47b4bd5aba Installing mcexec.1 man page 2017-09-20 16:37:05 +09:00
ea831c614e mcexec man page 2017-09-20 16:37:00 +09:00
5b51eb80a3 Redirect kmsg to /dev/log and detect hang-ups
1. ihkmond retrieves kmsg when the amount of kmsg exceeds the threshold and
   /dev/mcosX is deleted
2. ihkmond periodically monitors OS status changes to detect hang-ups
2017-09-20 15:25:19 +09:00
daa7526127 rusage and ihklib: Fix out-of-memory reporting and cleanup
1. Fix OOM: Count memory usage only when allocation succeeded
2. Fix OOM: Make user allocation fail when memory is running out
3. Fix OOM: Move rusage_init() before numa_init()
4. Cleanup: Rename ihkconfig/ihkosctl functions
5. Cleanup: Pass event type to eventfd()
6. Cleanup: arch/.../rusage.h --> arch/.../arch_rusage.h
2017-09-20 15:11:57 +09:00
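Fix 1 as a sketch (both helper names are hypothetical stand-ins): account memory only after the allocation has actually succeeded.

#include <stddef.h>

extern void *alloc_pages(size_t npages);          /* stand-in allocator */
extern void rusage_page_count_add(size_t npages); /* hypothetical hook */

static void *alloc_and_account(size_t npages)
{
	void *p = alloc_pages(npages);

	if (p)	/* count on success only, never on the failure path */
		rusage_page_count_add(npages);
	return p;
}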
a1af7edd6e ihk_os_create_pseudofs(): Add a function to prepare /proc and /sys 2017-09-20 15:11:57 +09:00
c5d71c325d Modify copyright of files related to XPMEM 2017-09-20 15:11:57 +09:00
aa7cb970c4 ihk_os_getrusage(): Compile LWK-specific results in mcctrl
1. User asks mcctrl for the result via ihk_os_getrusage(), passing a void *
2. mcctrl compiles the results and passes them to the user
3. User interprets it by using the type defined in the LWK-specific header
2017-09-20 15:03:45 +09:00
5664125e57 mcexec: verify number of processes for partitioned execution 2017-09-21 16:11:56 +09:00
203bfc2492 mcexec: limit nr. of threads for non-OpenMP partitioned execution 2017-09-21 15:30:37 +09:00
973d8ddd2c remove kernel/gencore.c
That file is not compiled (there are arch/*/kernel/gencore.c variants)
2017-09-12 18:27:28 +09:00
a491e49bbc syscall.c: fix misleading indent
This is a non-functional change, brought to attention by newer gcc warnings
2017-09-12 18:27:28 +09:00
2f9af42b2e configure: set KERNELSRC with double-quotes
This evals 4.12.5-300.fc26.x86_64 right away, which is necessary for e.g.
the variable used for System.map detection.
(looking at history, ihk always had double quotes while mckernel always
had single quotes -- this looks like a manual copy typo?)
2017-09-12 18:27:28 +09:00
b3613e2535 configure: check for read access on system.map
This lets us fall back gracefully to /System.map, which is
more open by default and binary-identical on RHEL systems
2017-09-12 18:27:28 +09:00
2a46fd0b2d compiler.h: take in recent linux updates for newer gcc support
Had to remove from original compiler-gcc:
 - things that deal with types, e.g. READ_ONCE macro and friends;
 - #define barrier(). This one would be better there at some point.
2017-09-12 18:27:28 +09:00
230272438f init_fpu: only call xgetbv if we have XSAVE cpuid
xgetbv crashes on CPUs without AVX; the XSAVE bit seems to indicate
that it is OK to call xgetbv
2017-09-12 18:27:28 +09:00
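A user-space sketch of the check: CPUID.(EAX=1):ECX bit 26 is XSAVE (strictly, bit 27/OSXSAVE reports whether the OS enabled it, but the XSAVE bit was the discriminator described here).

#include <stdint.h>
#include <stdio.h>

/* Read XCR0 via xgetbv, but only if CPUID advertises XSAVE;
 * calling xgetbv without the feature raises #UD, which is the
 * crash seen on AVX-less guests such as KVM's default CPU. */
static uint64_t xcr0_or_zero(void)
{
	uint32_t eax, ebx, ecx, edx;

	__asm__ volatile("cpuid"
			 : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
			 : "a"(1), "c"(0));
	if (!(ecx & (1u << 26)))
		return 0;	/* no XSAVE: skip xgetbv entirely */

	__asm__ volatile("xgetbv" : "=a"(eax), "=d"(edx) : "c"(0));
	return ((uint64_t)edx << 32) | eax;
}

int main(void)
{
	printf("XCR0 = %#llx\n", (unsigned long long)xcr0_or_zero());
	return 0;
}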
99a45f20c2 eliminate POSTK_DEBUG_TEMP_FIX_55: fixes preemption bug 2017-09-12 18:21:56 +09:00
43db8e2d65 remove osnum from mckernel kargs. refs #338 2017-09-12 14:53:44 +09:00
4eed36f124 procfs: support /proc/pid/status State field
refs #445
2017-09-12 13:37:27 +09:00
cdfa4015b7 load_elf: check mckernel execution
refs #758
2017-09-12 13:15:22 +09:00
a05b6e1ba8 Expand dump-functions for excluding user/unused memory (This is rebase commit for merging to development) 2017-09-11 15:49:04 +09:00
325082a571 adapt "out of tree build" for arm64 2017-09-11 15:29:53 +09:00
0278a876db disable POSTK_DEBUG_* on x86_64 2017-09-07 22:20:22 +09:00
707b245009 disable swap out/in in qlmpi 2017-09-07 16:06:56 +09:00
8cc264d794 fix build error with "out of tree build"
- change include-path
- enable memdump by default
2017-09-06 21:54:43 +09:00
9a550b310c Add hwcap.h for x86 2017-09-06 11:10:32 +09:00
9989f41fd3 add arm64 support
- add arm64 dependent codes with GICv3 and SVE support
- fix bugs based on architecture separation requests
2017-09-05 15:06:27 +09:00
704096b139 profile: fix process level aggregation bug 2017-09-04 09:07:58 +09:00
99ca46663b mcctrl, mcexec: fix a bunch of warnings 2017-09-04 08:53:32 +09:00
90fbfd6f7d clear_range_l3(): remove debug message 2017-09-04 08:33:42 +09:00
f4c32e5507 qlmpi: add testcase to qlmpi (rusage for swap) 2017-08-31 15:43:28 +09:00
4b3f220659 qlmpi: fix debugging part of swap 2017-08-31 14:04:11 +09:00
82a0f155d8 qlmpilib.c: make ql_init() static 2017-08-30 17:00:37 +09:00
a2b8235e83 Add -rpath to mcexec 2017-08-30 16:54:48 +09:00
b53fb5f5cb qlmpi: export qlmpilib.h 2017-08-30 10:37:36 +09:00
236a072311 Add qlmpi and swap to mckernel (This is rebase commit for merging to development) 2017-08-29 15:04:58 +09:00
74f15783d2 ihk_os_getrusage(): Add per-page-size memory usage accounting 2017-08-17 12:49:34 +09:00
184c2d311c fileobj_flush_page(): do not flush when MF_HOST_RELEASED is set 2017-08-17 12:49:34 +09:00
75e2bb7793 mcctrl: Fix debug messages 2017-08-17 12:49:34 +09:00
6d4d6440aa terminate(): clean-up and formatting 2017-08-08 11:12:55 +09:00
9194742de8 do_mmap(): fix calculation of search_free_space() hint 2017-08-01 16:24:07 +09:00
831a0637a1 delete debug print 2017-08-01 15:27:51 +09:00
ac432504a7 uti_attr: move kmalloc after error check 2017-07-28 10:31:59 +09:00
b39fec1104 uti: remove unused functions 2017-07-26 13:14:30 +09:00
86dedc32fa Eliminate Japanese comments 2017-07-15 20:04:16 +09:00
effde241b9 support uti_attr for utility thread offloading 2017-07-25 13:03:48 +09:00
101cab5b0a remove debug print 2017-07-25 13:02:17 +09:00
4cd1c120fa profile: add PROFILE_remote_page_fault 2017-07-23 19:00:00 +09:00
bf5ac7afc8 remote_flush_tlb_array_cpumask(): bundle remote TLB invalidations 2017-07-21 15:34:48 +09:00
bc423255d9 mcctrl/mcexec: limit thread pool size when too many threads exist on Linux 2017-07-21 15:33:19 +09:00
6714161c25 profile remote TLB invalidations 2017-07-20 22:28:25 +09:00
992a292c08 profile: better time breakdown and exclusion of idle cycles 2017-07-20 17:36:34 +09:00
64c2e437c6 open: check filename address (re-commit) 2017-07-19 11:37:55 +09:00
dd9675d65e NUMA: only print a short summary at boot time 2017-07-19 09:11:44 +09:00
51ed8dce06 numa_init(): fix rusage memory counting 2017-07-19 08:23:05 +09:00
01f5e46865 revert 2d7890731e 2017-07-18 12:13:48 +09:00
38961fca78 Revert "do_fork(): RLIMIT_NPROC check"
This reverts commit 035e7913d8.
2017-07-13 04:13:41 +09:00
2d7890731e add_process_memory_range: do not initialize the page when no physical page is present 2017-07-18 00:45:18 +09:00
7d181fccd9 open: check filename address 2017-07-18 00:09:39 +09:00
bd75e80df2 terminate: fix use of a freed pointer 2017-07-17 19:32:08 +09:00
035e7913d8 do_fork(): RLIMIT_NPROC check
1. mcexec sets RLIMIT_NPROC to the number of mcexec threads.
2. do_fork() gets the current number of threads by calling rusage function.
3. do_fork() returns -EAGAIN when the limit is exceeded.
2017-07-12 20:42:38 +09:00
7d38c7c147 delete debug print 2017-07-14 10:13:22 +09:00
a801bcc591 delete rusage.c 2017-07-14 09:52:33 +09:00
d7b8e7f4f4 fix counting of user pages
refs #864
2017-07-14 09:51:39 +09:00
6afea4af48 mcexec: Fix debug/error messages 2017-07-12 14:30:21 +09:00
6415dcfdcc mcexec: Disable address space layout randomization
Move the code from mcreboot.sh to mcexec.c.
2017-07-12 14:17:38 +09:00
0f58e9e77d NUMA: expose correct /sys/devices/system/node/nodeX/meminfo 2017-07-07 00:59:32 +09:00
72e3f5ee50 ihk_mc_get_ikc_cpu(): Get IKC destination CPU 2017-07-11 20:20:40 +09:00
8d57ad9bc4 pmc_start, pmc_stop: Error check on counter number 2017-07-11 19:05:45 +09:00
35b36c2d33 move_pages_smp_handler(): more parallelization 2017-07-08 18:36:13 +09:00
632611d78c mbind(): debug msg 2017-07-08 18:36:13 +09:00
d48d44d365 move_pages(): fix barrier in parallel implementation 2017-07-08 18:36:13 +09:00
4c0f401424 move_pages(): parallel implementation v1 2017-07-08 18:36:05 +09:00
06f824c829 pte_update_phys(): update physical address of a PTE 2017-07-08 18:36:05 +09:00
7a606baad4 move_pages(): sequential implementation 2017-07-08 18:36:05 +09:00
4c6c66555e memset_smp(): parallel memset 2017-07-08 18:36:05 +09:00
8426cf589a ihk_pagealloc_free(): report double-free in bitmap based allocator 2017-07-08 18:36:05 +09:00
da7421e8ee memdebug: more detailed error report 2017-07-08 18:36:05 +09:00
209748d913 visit_pte_range(): visit L1 PTEs but don't free for MF_PREMAP files 2017-07-08 18:36:04 +09:00
f81722c63b __mckernel_free_pages_in_allocator(): fix deallocation of invalid physical range 2017-07-08 18:35:50 +09:00
2189c55d99 x86: ASM fast memset() 2017-07-08 18:26:51 +09:00
201a7e2595 Red-black tree based physical memory management 2017-07-08 18:26:51 +09:00
5cdd194856 Port Linux red-black trees 2017-07-08 18:12:01 +09:00
0061adadfb temporary fix for bug #889 2017-07-04 12:04:37 +09:00
67843151d3 fix counting of rss and number of threads
refs #864
refs #865
2017-07-03 16:27:46 +09:00
083cf3fcc9 rusage_max_memory is set to the sum of all memory chunks
refs #891
2017-07-03 14:49:35 +09:00
4236323661 add SCD_MSG_EVENT_SIGNAL
refs #862
2017-07-03 14:49:13 +09:00
5a9bee55c9 kill system call offloading from interrupt_syscall (tid == -1): change to one-sided communication
refs #889
2017-07-03 14:48:42 +09:00
6e23b07b20 disable switching until thread termination completes
refs #888
2017-07-03 14:47:48 +09:00
e64bd49d9e Add comment for x86_sregs 2017-07-03 10:43:36 +09:00
72b8f99d3b Correct comment for do_page_fault_process_vm() 2017-07-03 10:43:36 +09:00
090937a5a3 fix out of tree build 2017-06-30 09:57:50 +09:00
2082acdf0d add executer/user/arch/x86_64/Makefile.in 2017-06-28 09:36:31 +09:00
a8f11634e6 remove debug print for uti tracer 2017-06-27 14:42:04 +09:00
4f9865cc8f clean up unused code 2017-06-27 13:46:38 +09:00
07efb3ab9a support utility thread offloading 2017-06-27 13:27:09 +09:00
2afc9d37d1 fix config.h inclusion 2017-06-17 07:05:33 +09:00
fa6f20a3c4 Correct comments in gencore.c 2017-06-16 21:47:23 +09:00
52bc052e1a mcexec: recursively bind mount $prefix/rootfs/ on / 2017-06-16 18:01:25 +09:00
f84415c310 mcexec: use atobytes() for MCKERNEL_RLIMIT_STACK 2017-06-15 16:50:34 +09:00
1a853e07d7 rus_vm_fault(): fix misaligned address before accessing PTE 2017-06-14 20:32:03 +09:00
07b0954610 IKC: add ihk_ikc_direction to ihk_ikc_listen_param. refs #841 2017-06-13 16:33:15 +09:00
1f006b2381 remote_page_fault(): free remote PF response packet to avoid memory leak 2017-06-12 22:03:12 +09:00
4dfd806aa7 mcctrl: release syscall packets to LWK -> Linux channels 2017-06-12 22:02:32 +09:00
c6e3185246 mcctrl: clean up RUS page hash at job completion 2017-06-12 13:04:03 +09:00
d9e6ff235d mcctrl: track and clean up ikc2linux channels 2017-06-12 13:03:07 +09:00
b03f69783a mcctrl: cleanup devobj pagers in release_handle() to avoid memory leak 2017-06-11 19:13:31 +09:00
ab915f3331 mcctrl: clean up pagers for file objects to avoid memory leak 2017-06-11 19:11:54 +09:00
7773c4aef6 add log print for existing processes/threads
usage: ihkosctl 0 ioctl 40000000 [1-4]
1: print for existing processes
2: print for existing threads
3: print for existing processes without process lock
4: print for existing threads without thread lock
2017-06-11 15:19:24 +09:00
58e531eb58 mcreboot: add taskset -c 0 to insmod. refs #848 2017-06-09 17:18:45 +09:00
9beef7d901 sysfs: fix directory memory leak 2017-06-09 15:51:41 +09:00
0733592eb5 mcexec_open_exec() fix filename memory leak 2017-06-09 15:51:14 +09:00
4d0e0728f4 destroy_thread(): disable IRQ while holding update lock 2017-06-08 17:40:35 +09:00
66fad4c7a4 terminate(): do not iterate process hash if no children processes exist 2017-06-08 14:53:57 +09:00
5758dba7cf use spinlocks in MCS rwlock 2017-06-08 14:16:29 +09:00
1ca16b9693 rusage: add kernel/include/config.h.in 2017-06-08 09:02:52 +09:00
d29922c820 configure: re-autoreconf 2017-06-07 17:33:32 +09:00
46b48ac59b __return_syscall(): verify response structure 2017-06-07 17:21:55 +09:00
446ef0465b mcctrl: verify ihk_device_map_virtual()'d buffer before accessing 2017-06-07 17:21:55 +09:00
200fe9aec4 mcctrl/mcexec: fix per-process data reference counting 2017-06-07 17:21:55 +09:00
fedba28a93 extend_process_region(): fix alignment 2017-06-07 17:21:55 +09:00
b527503937 Fix rusage 2017-06-07 15:15:20 +09:00
6bdafbd33b Fix rusage 2017-06-07 09:30:42 +09:00
12e7ed644f fileobj_flush_page(): do not offload for files with MF_HOST_RELEASED flag set 2017-06-05 22:20:25 +09:00
edf059888d support rusage parameter of wait4
refs #857
2017-05-28 07:52:47 +09:00
a66fb96cd9 re-autoconf 2017-05-28 07:52:38 +09:00
dd2ef89997 SMP: generic function call facility for CPU sets 2017-05-28 07:41:48 +09:00
ba7edf1981 move out local IRQ vector definitions to shared header 2017-05-28 07:36:21 +09:00
a669fc5125 extend_process_region(): align to heap extension 2017-05-26 15:45:57 +09:00
c0cabc2d83 brk(): return old address if memory allocation fails 2017-05-26 15:41:38 +09:00
e306b1e838 fileobj_create(): fix --mpol-shm-premap for Quadrant mode 2017-05-31 08:33:29 +09:00
0c3b705f98 brk(): make aggressive heap extension optional 2017-05-24 01:41:54 +09:00
9f55263528 mcexec: atobytes() to convert size string to # of bytes 2017-05-24 01:41:54 +09:00
74c5f61fd5 mmap(): fix populate_len warning 2017-05-24 01:41:54 +09:00
cadb66e5c1 init_host_ikc2linux(): adjust minimum queue size 2017-05-23 20:00:09 +09:00
9b5ccb5a33 Pre-map file mappings from /dev/shm (--mpol-shm-premap mcexec argument) 2017-05-23 20:00:06 +09:00
c5079898c2 mckernel_allocate_aligned_pages_node(): support explicit NUMA node designation 2017-05-23 19:58:52 +09:00
746b459e7f profile: more detailed profiling of file PFs 2017-05-23 19:58:52 +09:00
4c42086154 profile: fix job level clearing 2017-05-23 19:58:52 +09:00
56ee0787c9 profiler: function to clear process level logs 2017-05-23 19:58:52 +09:00
e901d42fb6 mcexec: --extend-heap-by: argument to specify heap extension size 2017-05-23 19:58:49 +09:00
29ab087fa2 execve(): larger allocation for program descriptor 2017-05-23 19:57:08 +09:00
105d373765 PROFILE_page_fault_XXX: more detailed page PF profiling 2017-05-23 19:57:08 +09:00
0dd2fad33b brk(): more forceful heap extension 2017-05-23 19:57:08 +09:00
e554f4e2f9 mcexec: --disable-sched-yield: avoid kernel/user switch 2017-05-23 19:57:08 +09:00
a256280118 PROFILE_mmap_XXX: more detailed mmap profiling 2017-05-23 19:57:08 +09:00
d75be7228b PROFILE_mmap_anon_no_contig_phys: profile ANON mmap()s that couldn't be backed by contiguous physical memory 2017-05-23 02:42:06 +09:00
923dc4aa11 PROFILE_mpol_alloc_missed: profile allocations that fail to satisfy user requested memory policy 2017-05-23 02:42:06 +09:00
e3e0f6a174 mcexec: introduction of --profile 2017-05-23 02:42:06 +09:00
dd6f721e03 profile: job level event accumulation 2017-05-23 02:42:06 +09:00
9c25d47d9b mcexec: transfer job information to LWK 2017-05-23 02:42:06 +09:00
5a4148aaaf ___kfree(): disregard NULL pointer argument 2017-05-23 02:42:06 +09:00
32c8f6192d unhandled_page_fault(): print registers for kernel mode PF 2017-05-23 02:42:05 +09:00
e2f424846c profile: rewrite syscall tracker for generic profiling code 2017-05-23 02:42:05 +09:00
989af7e045 mcexec: RLIMIT_STACK handling 2017-05-23 02:39:42 +09:00
721cee05a2 MPOL default threshold to 0 2017-05-23 02:39:42 +09:00
86aa76e088 IKC: increase ikc2linux channels' queue size 2017-05-23 02:39:42 +09:00
ab113658f1 mcexec: --no-bind-ikc-map for optionally disabling binding 2017-05-23 02:39:42 +09:00
2d72042021 mcexec: bind to CPUs according to ikc_map 2017-05-23 02:39:42 +09:00
610463ff39 sched_setaffinity(): respect process cpu_set 2017-05-23 02:39:42 +09:00
dfb0a37305 procfs: increase procfs request timeout 2017-05-23 02:39:42 +09:00
26b9484bae mcexec: --mpol-threshold to control MPOL_BIND/MPOL_PREFERRED 2017-05-23 02:39:42 +09:00
b4aecfd43c partitioned execution: order by process start time 2017-05-23 02:39:42 +09:00
bf036f19f7 mcreboot: offline/re-online RAM before IHK reserve 2017-05-23 02:39:42 +09:00
182202523e mcexec/mm: user memory policy control for heap, stack, etc. 2017-05-23 02:39:42 +09:00
afb7cb3a1e BSS/data: demand paging for non-file section and respect user requested NUMA allocation policy 2017-05-23 02:39:41 +09:00
fdbdcbd0ee VR_AP_USER: memory range flag to respect user mempolicy (e.g., in PF handler) 2017-05-23 02:39:41 +09:00
a18fd1f45c sched_yield(): optionally disable wait 2017-05-23 02:39:41 +09:00
d8170e292c init_process_stack(): debug msg format 2017-05-23 02:39:41 +09:00
fee5234c54 stack: force transparent large pages 2017-05-23 02:39:41 +09:00
6309095fd2 brk(): force transparent large pages 2017-05-23 02:39:41 +09:00
b005adc103 SCD_MSG_PERF_CTRL: use IKC3 channel for response packet 2017-05-20 12:43:08 +09:00
21373338cc mcctrl: IHK CPU register manipulation implementation 2017-05-20 12:38:14 +09:00
39352cd364 event_signal(): use IKC3 ikc2linux channel 2017-05-19 10:31:15 +09:00
84025cc9cb configure : add option --enable-rusage 2017-05-19 10:31:14 +09:00
04cbfbb025 xpmem: porting xpmem v2.6.3
implement xpmem_get, xpmem_release, xpmem_attach, xpmem_detach
2017-05-19 10:30:36 +09:00
ba58054c9d create rusage branch. 2017-05-19 10:30:36 +09:00
7fd55dc83f IKC: only CPU 0 checks the master channel 2017-05-19 10:26:30 +09:00
d66af42f7b Revert "IKC: separate IRQ between Master-channel and Regular-channel"
This reverts commit 3c98b9410966ceebe187ebae1038317b628fbb03.
2017-05-19 10:26:30 +09:00
4b964b8e0d IKC: allocate Linux channel table dynamically 2017-05-19 10:26:30 +09:00
65dc3440cb IKC: separate IRQ between Master-channel and Regular-channel 2017-05-19 10:26:30 +09:00
fbd9086ce5 IKC: delete receive channel list 2017-05-19 10:26:29 +09:00
c2b1d8e3ef IKC: delete the comments for review 2017-05-19 10:26:29 +09:00
e2d59e2cb9 mcreboot-smp: introduction of ikc_irq_start argument 2017-05-19 10:26:29 +09:00
3de0f5ea19 mcreboot-smp: introduction of ikc_map argument 2017-05-19 10:26:29 +09:00
373e9ea63c ap_wait(): init syscall channel with proper Linux remote CPU 2017-05-19 10:26:29 +09:00
8daffa939e IKC: distribute IKC-interrupt to Linux cpus. 2017-05-19 10:26:29 +09:00
eaa4d35fab do_migrate(): don't clear oversubscribed source CPUs from remote TLB mask 2017-05-17 11:22:29 +09:00
a968c935b5 Fix timing of save/restore of smp_affinity, and modification of /proc/irq/*/smp_affinity 2017-05-15 14:52:22 +09:00
e01f6dd6ea eclair: obtain kernel_base from dump_mem_chunks_t 2017-05-12 13:23:23 +09:00
a07d802cbe Fix manipulation of /proc/irq/*/smp_affinity
Fix the case where
(1) #CPUs % 32 == 0
(2) #CPUs % 4 != 0
2017-05-12 09:35:49 +09:00
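For reference, a from-scratch C illustration of the /proc/irq/*/smp_affinity format (comma-separated 32-bit hex words, most significant first); the two cases named above are exactly the word boundary and the partial leading nibble. This is not the script's actual code.

#include <stdio.h>

/* Print a mask with the lowest 'ncpus' bits set in smp_affinity
 * format. ncpus % 32 == 0 means the leading word is full, and
 * ncpus % 4 != 0 means it ends in a partial hex digit. */
static void print_affinity_mask(unsigned int ncpus)
{
	int words = (ncpus + 31) / 32;
	int w;

	for (w = words - 1; w >= 0; w--) {
		unsigned int bits = (w == words - 1 && ncpus % 32)
					? ncpus % 32 : 32;
		unsigned long val = (bits == 32)
					? 0xffffffffUL
					: (1UL << bits) - 1;
		printf("%08lx%s", val, w ? "," : "\n");
	}
}

int main(void)
{
	print_affinity_mask(36); /* 0000000f,ffffffff */
	print_affinity_mask(64); /* ffffffff,ffffffff */
	return 0;
}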
1e442cce10 mcklogd: fixed termination method of mcklogd 2017-05-09 16:28:21 +09:00
3f870b69a6 mcklogd: change the timing of start/stop. 2017-05-09 16:06:07 +09:00
0fef80cb19 SCD_MSG_CPU_RW_REG: use syscall channel for reply packet in CPU MSR read/write operation 2017-05-05 00:16:02 +09:00
9992fe0d72 mcctrl: support remote CPU MSR read/write operations 2017-05-05 00:01:43 +09:00
2d19ed9391 configure.ac: check NUMA development library 2017-04-29 05:30:27 +09:00
2f2f04d5a1 mcexec: ENABLE_MCOVERLAYFS on CentOS for up to version 7.3 2017-04-29 05:10:21 +09:00
1541b26086 ihklib: add pa_info functions. 2017-04-27 17:13:49 +09:00
e6c4d7731d Merge remote-tracking branch 'origin/rusage'
Conflicts:
	configure
	kernel/process.c
2017-04-27 15:10:38 +09:00
94b527e027 modified: lib/include/ihk/rusage.h 2017-04-27 14:47:21 +09:00
8c9b207557 configure : add option --enable-rusage 2017-04-27 14:00:59 +09:00
dacb05844b mcoverlayfs: support compile up to 3.10.0-514 2017-04-20 00:48:56 +09:00
c3ec5d20ca configure: --with-uname_r: optionally specify target kernel version string 2017-04-20 00:48:56 +09:00
92a40f92dd mcctrl_put_per_proc_data(): do not use task_pid_vnr() in IRQ context 2017-03-30 15:02:57 +09:00
45bddf3caa mcexec_syscall(): do not use task_pid_vnr() in IRQ context 2017-03-30 14:56:57 +09:00
b7671fedd3 mcctrl_per_proc_data: comments 2017-03-30 14:51:24 +09:00
c38d536aaa xpmem: porting xpmem v2.6.3
implement xpmem_get, xpmem_release, xpmem_attach, xpmem_detach
2017-03-29 18:20:53 +09:00
4ee0c05e08 mcoverlayfs: fix NULL pointer dereference on ovl_dentry_release() 2017-03-28 21:52:41 +09:00
f2ab0193e5 fix a panic when thread termination and signal delivery overlap 2017-03-28 11:31:27 +09:00
ef910fdf0e Discard outstanding system calls at the end of mcexec. 2017-03-28 11:23:54 +09:00
b97a8c5138 mcexec_open_exec(): use strncpy_from_user() before accessing file name 2017-03-21 20:13:12 +09:00
034d10b185 Fix: when receiving a signal during futex processing, the signal was not processed. 2017-03-21 20:37:17 +09:00
3fe2257929 create rusage branch. 2017-03-15 23:22:51 +09:00
eca4018ecb mcctrl: release syscall packets on mcexec termination
refs #835
2017-03-11 20:57:54 +09:00
e936b2ebe1 memobj_release: don't call syscall_generic_forwarding after process termination
refs #816
2017-03-10 12:58:47 +09:00
d8112f92f8 terminate(): don't call free_all_process_memory_range
refs #816
2017-03-08 14:30:28 +09:00
1076010de4 Boundary check in early_alloc_pages() 2017-03-04 17:21:57 +09:00
da4a5ec44b page_allocator_init(): move memory_nodes to BSS 2017-02-24 19:33:25 +09:00
d35aa9b100 page_allocator_init(): clean-up code, eliminate initial flag 2017-02-24 14:25:22 +09:00
ba8dbf1b19 Put kernel image and page table into one chunk 2017-02-24 14:21:32 +09:00
6213f0e488 mcctrl: fix cpumask macros for Linux 4.6 2017-02-02 15:49:39 +09:00
4ef82c2683 OFP-SNC-4: offline/online MCDRAM before memory reservation 2017-01-30 14:47:36 +09:00
e066a8798c IKC: adjust master channel queue size to nr. of CPUs 2017-01-30 07:24:09 +09:00
b702c9691e AP init: synchronize syscall channel initialization 2017-01-30 07:24:09 +09:00
addbe91e59 do_migrate(): signal migrated thread before releasing runq lock 2017-01-30 07:24:09 +09:00
b812848a0e eclair-dump-backtrace.exp: handle user space threads 2017-01-30 07:24:09 +09:00
ad214c8206 reserve_user_space(): mutual exclusion on mmap 2017-01-30 07:24:09 +09:00
1bc3218fc1 partitioned execution: bind mcexec to corresponding NUMA node 2017-01-30 07:24:09 +09:00
5cc420a6c3 syscall/offload tracker: clean-up and support process-wise aggregation 2017-01-30 07:24:09 +09:00
c7686fdf4e execve(): fix memory leak 2017-01-30 07:24:09 +09:00
c1dae4d8b0 mmap(): no physical memory pre-allocation for Intel 128MB mapping 2017-01-30 07:24:08 +09:00
2473025201 do_mmap(): remove debug code
refs #395
2017-01-16 15:53:27 +09:00
fa5c1b23ca eclair-dump-backtrace.exp: dump full backtrace of all mckernel threads 2017-01-15 10:46:07 +09:00
f2f499aace mcreboot/stop: toggle address-space layout randomization (ASLR) to avoid mcexec user-space reservation failure 2017-01-15 10:36:50 +09:00
bd47b909bf futex(): spin wait when CPU not oversubscribed and fix lost wake-up bug 2017-01-13 08:43:25 +09:00
d646c2a4b9 cpu_set/clear(): unsigned long for IRQ flags 2017-01-13 08:43:25 +09:00
865ada46bf IKC2: eliminate unused IKC structures 2017-01-13 08:43:25 +09:00
cdffc5e853 do_syscall(): eliminate centralized lock for exit/kill code path (use IKC2 thread pool) 2017-01-08 14:16:10 +09:00
0e67e9266b ap_init(): reformat AP cores report 2017-01-08 14:16:10 +09:00
1ff0afe6fb devobj/fileobj: do not try to free memory for device file mappings 2017-01-08 14:16:10 +09:00
d34884f9a4 numa_init(): error handling and propagation 2017-01-08 14:15:51 +09:00
7a0c204dc1 eclair: report PID for all threads 2017-01-08 14:15:44 +09:00
25f67c9ef8 mcreboot/mcstop-smp-x86: suppress libkmod warnings 2017-01-08 14:15:34 +09:00
a776464a7e mcreboot/mcstop: adjust swappiness 2017-01-03 09:02:41 +09:00
c40e7105e6 NUMA: order nodes by distance for MPOL_BIND / MPOL_PREFERRED policies as well 2017-01-03 09:02:29 +09:00
5bac38ce8b mmap()/stack/heap: follow user requested NUMA policy 2016-12-31 19:38:05 +09:00
e3f0662130 allocate_aligned_pages_node(): debug msg format 2016-12-31 16:25:14 +09:00
21df56b233 sched_wakeup_thread(): memory barrier after status update 2016-12-31 10:44:13 +09:00
393cec513c allocate_aligned_pages_node(): follow user policy only for user allocations 2016-12-31 10:10:42 +09:00
4437ecc69a do_mmap(): indicate user level allocations for anonymous mappings 2016-12-31 10:09:49 +09:00
40d75baca2 ihk_mc_ap_flag: rewrite flag type, intro for denoting user level allocations 2016-12-30 19:19:34 +09:00
00f3fe0840 ihk_mc_alloc_aligned_pages_node(): support for explicit indication of target NUMA node 2016-12-30 19:03:59 +09:00
47a8b5bda5 mmap(): faster pre-allocation for anonymous private mappings 2016-12-30 17:18:44 +09:00
ec75095073 add_process_memory_range(): optionally return range object 2016-12-30 15:51:17 +09:00
1794232989 irqbalance_mck: create environment file in /tmp to avoid race condition on PFS 2016-12-30 15:47:44 +09:00
40978d162e procfs_read/write(): rewrite synchronization for scalability and correctness 2016-12-28 14:17:17 +09:00
536ce9f927 process_procfs_request(): use IRQ save MCS locks while iterating thread list to avoid deadlock 2016-12-28 12:29:10 +09:00
4e5ec74ffe mmap(): fault in memory only up to file size for populated file mappings 2016-12-27 16:33:24 +09:00
a6d8125fd7 mcreboot-smp-x86: reserve memory first and then CPUs 2016-12-27 15:19:05 +09:00
15d3a0361e destroy_ikc_channels(): eliminate kprint from error free path 2016-12-27 11:52:24 +09:00
6ad84a96a3 mcexec_syscall(): avoid calling task_pid_nr_ns() in IRQ context 2016-12-26 20:43:17 +09:00
16e846e9b6 mcexec: report error in prepare_image() if wait queue interrupted 2016-12-26 20:42:31 +09:00
5bc7185f07 do_migrate(): update debug msg format 2016-12-25 17:34:26 +09:00
32462dfb2d eclair: fix CPU number display for non-active threads 2016-12-25 17:28:31 +09:00
e3ef88c0cf do_sigsuspend(): deschedule thread when necessary (fixes gdb deadlock) 2016-12-25 17:24:32 +09:00
829aae7b8d mcexec: PATH_MAX buffer length in do_generic_syscall() 2016-12-25 17:20:14 +09:00
b836b84825 mcexec_prepare_image(): use memory barrier when updating request status 2016-12-25 17:19:14 +09:00
3e1f154412 patch_process_vm(): eliminate kprintfs from error free code path 2016-12-25 17:18:20 +09:00
e7af537452 get_pid_cred(): proper locking around pid_task 2016-12-25 17:17:27 +09:00
3565959af7 eclair: fix compiler warnings 2016-12-23 09:57:50 +09:00
4667136a4c mcctrl: refcount per-process data to avoid corrupted syscall request lists 2016-12-23 09:54:15 +09:00
972d14611a mcctrl: move prepare waitqueue to per-process data 2016-12-22 10:15:31 +09:00
e90eef8910 eclair: support for direct memory inspection 2016-12-21 21:55:32 +09:00
f81927b85b Revert "brk(): larger allocation units internally"
This reverts commit c58ab0f648.
2016-12-20 11:11:09 +09:00
701cdcdab1 use MCS locks in physical memory allocator 2016-12-19 12:57:59 +09:00
9635a628a9 fileobj/shmobj/devobj: add file size to memobj 2016-12-19 12:55:12 +09:00
3e1b16f3fc syscall_channel: increase queue size to avoid deadlock in ikc_send() 2016-12-18 21:12:38 +09:00
ff37ff9ccf memobj: synch prefetch among processes 2016-12-18 21:12:38 +09:00
5b7bcb7170 fileobj: use read/write MCS locks in page hash 2016-12-18 21:12:37 +09:00
6a5fe90f98 mcexec_get_cpuset(): save CPU set and IKC target cpu in per-process data 2016-12-18 21:12:37 +09:00
91373337ba mcctrl: add IKC target CPU to OS file release_handler 2016-12-18 21:12:37 +09:00
56ed726a88 pager_req_create(): prefetch for MPI library and zerofill for shm 2016-12-18 21:12:37 +09:00
bce10e11e4 fileobj: rewrite for scalability using per-file page hash 2016-12-18 21:12:37 +09:00
91cdb16158 MCS lock: separate IRQ disable/enable versions 2016-12-18 21:12:37 +09:00
c58ab0f648 brk(): larger allocation units internally 2016-12-18 21:12:37 +09:00
f410af1cfc xpmem: porting xpmem v2.6.3
implement xpmem_make, xpmem_remove
2016-12-16 17:00:09 +09:00
aa15e5eea8 mcexec: -t option and OMP_NUM_THREADS for thread pool size 2016-12-14 18:56:30 +09:00
df9f1f8f78 allocate_aligned_pages(): take user set NUMA policy into account 2016-12-13 17:51:39 +09:00
7ace35d737 mcexec_get_cpuset(): fix NUMA search bug 2016-12-13 17:50:50 +09:00
551999ff6b NUMA: order nodes based on distances 2016-12-13 10:46:17 +09:00
052b3f44ca mcexec: -n: topology aware partitioned execution 2016-12-10 16:27:57 +09:00
fdcf766337 prepare_process(): pass cpu_set in program_load_desc 2016-12-09 16:32:20 +09:00
7d13bfb14e set_mempolicy(): limit maxnode to PROCESS_NUMA_MASK_BITS 2016-12-08 21:05:10 +09:00
202bfd9955 IHK-API: expand and fix for ver 1.2. 2016-12-08 17:28:53 +09:00
c99e36235b execve(): disable debug warnings 2016-12-08 16:33:24 +09:00
3cecafac59 obtain_clone_cpuid(): respect parent's CPU set 2016-12-08 16:01:30 +09:00
61fc4c5e55 show_context_stack(): fix warning 2016-12-07 11:42:09 +09:00
fad73cacc1 x86: display call stack for IRQ 133 (for debug) 2016-12-07 11:32:02 +09:00
8fced29978 page_fault_handler(): improved debug msg format 2016-12-07 11:25:02 +09:00
b0f4ae4890 ihk_mc_pt_set_pte(): double check phys address alignment 2016-12-07 11:23:45 +09:00
7070094a31 ihk_mc_pt_print_pte(): handle large pages correctly 2016-12-07 11:13:53 +09:00
011185e3f7 __ihk_pagealloc_large(): fix 1GB page alignment bug 2016-12-07 09:38:37 +09:00
461881e46a /proc/mckernel to indicate McKernel 2016-12-06 14:29:25 +09:00
607 changed files with 71816 additions and 3932 deletions


@ -1,15 +1,24 @@
 TARGET = @TARGET@
 SBINDIR = @SBINDIR@
 INCDIR = @INCDIR@
 ETCDIR = @ETCDIR@
 MANDIR = @MANDIR@
-all::
-	@(cd executer/kernel/mcctrl; make modules)
-	@(cd executer/kernel/mcoverlayfs; make modules)
-	@(cd executer/user; make)
-	@case "$(TARGET)" in \
-	attached-mic | builtin-x86 | builtin-mic | smp-x86) \
-		(cd kernel; make) \
+
+all: executer-mcctrl executer-mcoverlayfs executer-user mckernel
+
+executer-mcctrl:
+	+@(cd executer/kernel/mcctrl; $(MAKE) modules)
+
+executer-mcoverlayfs:
+	+@(cd executer/kernel/mcoverlayfs; $(MAKE) modules)
+
+executer-user:
+	+@(cd executer/user; $(MAKE))
+
+mckernel:
+	+@case "$(TARGET)" in \
+	attached-mic | builtin-x86 | builtin-mic | smp-x86 | smp-arm64) \
+		(cd kernel; $(MAKE)) \
 	;; \
 	*) \
 		echo "unknown target $(TARGET)" >&2 \
@ -17,13 +26,13 @@ all::
 	;; \
 	esac
 
-install::
-	@(cd executer/kernel/mcctrl; make install)
-	@(cd executer/kernel/mcoverlayfs; make install)
-	@(cd executer/user; make install)
+install:
+	@(cd executer/kernel/mcctrl; $(MAKE) install)
+	@(cd executer/kernel/mcoverlayfs; $(MAKE) install)
+	@(cd executer/user; $(MAKE) install)
 	@case "$(TARGET)" in \
-	attached-mic | builtin-x86 | builtin-mic | smp-x86) \
-		(cd kernel; make install) \
+	attached-mic | builtin-x86 | builtin-mic | smp-x86 | smp-arm64) \
+		(cd kernel; $(MAKE) install) \
 	;; \
 	*) \
 		echo "unknown target $(TARGET)" >&2 \
@ -31,29 +40,20 @@ install::
 	;; \
 	esac
 	@case "$(TARGET)" in \
-	attached-mic) \
+	smp-x86 | smp-arm64) \
 		mkdir -p -m 755 $(SBINDIR); \
-		install -m 755 arch/x86/tools/mcreboot-attached-mic.sh $(SBINDIR)/mcreboot; \
-		install -m 755 arch/x86/tools/mcshutdown-attached-mic.sh $(SBINDIR)/mcshutdown; \
-		mkdir -p -m 755 $(MANDIR)/man1; \
-		install -m 644 arch/x86/tools/mcreboot.1 $(MANDIR)/man1/mcreboot.1; \
-		;; \
-	builtin-x86) \
-		mkdir -p -m 755 $(SBINDIR); \
-		install -m 755 arch/x86/tools/mcreboot-builtin-x86.sh $(SBINDIR)/mcreboot; \
-		install -m 755 arch/x86/tools/mcshutdown-builtin-x86.sh $(SBINDIR)/mcshutdown; \
-		mkdir -p -m 755 $(MANDIR)/man1; \
-		install -m 644 arch/x86/tools/mcreboot.1 $(MANDIR)/man1/mcreboot.1; \
-		;; \
-	smp-x86) \
-		mkdir -p -m 755 $(SBINDIR); \
-		install -m 755 arch/x86/tools/mcreboot-smp-x86.sh $(SBINDIR)/mcreboot.sh; \
-		install -m 755 arch/x86/tools/mcstop+release-smp-x86.sh $(SBINDIR)/mcstop+release.sh; \
+		install -m 755 arch/x86_64/tools/mcreboot-smp-x86.sh $(SBINDIR)/mcreboot.sh; \
+		install -m 755 arch/x86_64/tools/mcstop+release-smp-x86.sh $(SBINDIR)/mcstop+release.sh; \
+		install -m 755 arch/x86_64/tools/mcoverlay-destroy-smp-x86.sh $(SBINDIR)/mcoverlay-destroy.sh; \
+		install -m 755 arch/x86_64/tools/mcoverlay-create-smp-x86.sh $(SBINDIR)/mcoverlay-create.sh; \
+		install -m 755 arch/x86_64/tools/eclair-dump-backtrace.exp $(SBINDIR)/eclair-dump-backtrace.exp;\
 		mkdir -p -m 755 $(ETCDIR); \
-		install -m 644 arch/x86/tools/irqbalance_mck.service $(ETCDIR)/irqbalance_mck.service; \
-		install -m 644 arch/x86/tools/irqbalance_mck.in $(ETCDIR)/irqbalance_mck.in; \
+		install -m 644 arch/x86_64/tools/irqbalance_mck.service $(ETCDIR)/irqbalance_mck.service; \
+		install -m 644 arch/x86_64/tools/irqbalance_mck.in $(ETCDIR)/irqbalance_mck.in; \
 		mkdir -p -m 755 $(INCDIR); \
 		install -m 644 kernel/include/swapfmt.h $(INCDIR); \
 		mkdir -p -m 755 $(MANDIR)/man1; \
-		install -m 644 arch/x86/tools/mcreboot.1 $(MANDIR)/man1/mcreboot.1; \
+		install -m 644 arch/x86_64/tools/mcreboot.1 $(MANDIR)/man1/mcreboot.1; \
 	;; \
 	*) \
 		echo "unknown target $(TARGET)" >&2 \
@ -61,13 +61,13 @@ install::
 	;; \
 	esac
 
-clean::
-	@(cd executer/kernel/mcctrl; make clean)
-	@(cd executer/kernel/mcoverlayfs; make clean)
-	@(cd executer/user; make clean)
+clean:
+	@(cd executer/kernel/mcctrl; $(MAKE) clean)
+	@(cd executer/kernel/mcoverlayfs; $(MAKE) clean)
+	@(cd executer/user; $(MAKE) clean)
 	@case "$(TARGET)" in \
-	attached-mic | builtin-x86 | builtin-mic | smp-x86) \
-		(cd kernel; make clean) \
+	attached-mic | builtin-x86 | builtin-mic | smp-x86 | smp-arm64) \
+		(cd kernel; $(MAKE) clean) \
 	;; \
 	*) \
 		echo "unknown target $(TARGET)" >&2 \


@ -0,0 +1,28 @@
# Makefile.arch COPYRIGHT FUJITSU LIMITED 2015-2017
VDSO_SRCDIR = $(SRC)/../arch/$(IHKARCH)/kernel/vdso
VDSO_BUILDDIR = @abs_builddir@/vdso
VDSO_SO_O = $(O)/vdso.so.o
IHK_OBJS += assert.o cache.o cpu.o cputable.o context.o entry.o entry-fpsimd.o
IHK_OBJS += fault.o head.o hyp-stub.o local.o perfctr.o perfctr_armv8pmu.o proc.o proc-macros.o
IHK_OBJS += psci.o smp.o trampoline.o traps.o fpsimd.o
IHK_OBJS += debug-monitors.o hw_breakpoint.o ptrace.o
IHK_OBJS += $(notdir $(VDSO_SO_O)) memory.o syscall.o vdso.o
IHK_OBJS += irq-gic-v2.o irq-gic-v3.o
IHK_OBJS += memcpy.o memset.o
IHK_OBJS += cpufeature.o
# POSTK_DEBUG_ARCH_DEP_18 coredump arch separation.
# IHK_OBJS added coredump.o
IHK_OBJS += coredump.o
$(VDSO_SO_O): $(VDSO_BUILDDIR)/vdso.so
$(VDSO_BUILDDIR)/vdso.so: FORCE
$(call echo_cmd,BUILD VDSO,$(TARGET))
@mkdir -p $(O)/vdso
@TARGETDIR="$(TARGETDIR)" $(submake) -C $(VDSO_BUILDDIR) $(SUBOPTS) prepare
@TARGETDIR="$(TARGETDIR)" $(submake) -C $(VDSO_BUILDDIR) $(SUBOPTS)
FORCE:


@ -0,0 +1,52 @@
/* assert.c COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <process.h>
#include <list.h>
#include <ihk/debug.h>
#include <ihk/context.h>
#include <asm-offsets.h>
#include <cputable.h>
#include <thread_info.h>
#include <smp.h>
#include <ptrace.h>
/* assert for struct pt_regs member offset & size define */
STATIC_ASSERT(offsetof(struct pt_regs, regs[0]) == S_X0);
STATIC_ASSERT(offsetof(struct pt_regs, regs[1]) == S_X1);
STATIC_ASSERT(offsetof(struct pt_regs, regs[2]) == S_X2);
STATIC_ASSERT(offsetof(struct pt_regs, regs[3]) == S_X3);
STATIC_ASSERT(offsetof(struct pt_regs, regs[4]) == S_X4);
STATIC_ASSERT(offsetof(struct pt_regs, regs[5]) == S_X5);
STATIC_ASSERT(offsetof(struct pt_regs, regs[6]) == S_X6);
STATIC_ASSERT(offsetof(struct pt_regs, regs[7]) == S_X7);
STATIC_ASSERT(offsetof(struct pt_regs, regs[30]) == S_LR);
STATIC_ASSERT(offsetof(struct pt_regs, sp) == S_SP);
STATIC_ASSERT(offsetof(struct pt_regs, pc) == S_PC);
STATIC_ASSERT(offsetof(struct pt_regs, pstate) == S_PSTATE);
STATIC_ASSERT(offsetof(struct pt_regs, orig_x0) == S_ORIG_X0);
STATIC_ASSERT(offsetof(struct pt_regs, syscallno) == S_SYSCALLNO);
STATIC_ASSERT(sizeof(struct pt_regs) == S_FRAME_SIZE);
/* assert for struct cpu_info member offset & size define */
STATIC_ASSERT(offsetof(struct cpu_info, cpu_setup) == CPU_INFO_SETUP);
STATIC_ASSERT(sizeof(struct cpu_info) == CPU_INFO_SZ);
/* assert for struct thread_info member offset define */
STATIC_ASSERT(offsetof(struct thread_info, flags) == TI_FLAGS);
STATIC_ASSERT(offsetof(struct thread_info, cpu_context) == TI_CPU_CONTEXT);
/* assert for arch depend kernel stack size and common kernel stack pages */
STATIC_ASSERT((KERNEL_STACK_SIZE * 2) < (KERNEL_STACK_NR_PAGES * PAGE_SIZE));
/* assert for struct secondary_data member offset define */
STATIC_ASSERT(offsetof(struct secondary_data, stack) == SECONDARY_DATA_STACK);
STATIC_ASSERT(offsetof(struct secondary_data, next_pc) == SECONDARY_DATA_NEXT_PC);
STATIC_ASSERT(offsetof(struct secondary_data, arg) == SECONDARY_DATA_ARG);
/* assert for sve defines */
/* @ref.impl arch/arm64/kernel/signal.c::BUILD_BUG_ON in the init_user_layout */
STATIC_ASSERT(sizeof(struct sigcontext) - offsetof(struct sigcontext, __reserved) > ALIGN_UP(sizeof(struct _aarch64_ctx), 16));
STATIC_ASSERT(sizeof(struct sigcontext) - offsetof(struct sigcontext, __reserved) -
ALIGN_UP(sizeof(struct _aarch64_ctx), 16) > sizeof(struct extra_context));
STATIC_ASSERT(SVE_PT_FPSIMD_OFFSET == sizeof(struct user_sve_header));
STATIC_ASSERT(SVE_PT_SVE_OFFSET == sizeof(struct user_sve_header));

arch/arm64/kernel/cache.S

@ -0,0 +1,39 @@
/* cache.S COPYRIGHT FUJITSU LIMITED 2015 */
#include <linkage.h>
#include "proc-macros.S"
/*
* __inval_cache_range(start, end)
* - start - start address of region
* - end - end address of region
*/
ENTRY(__inval_cache_range)
/* FALLTHROUGH */
/*
* __dma_inv_range(start, end)
* - start - virtual start address of region
* - end - virtual end address of region
*/
__dma_inv_range:
dcache_line_size x2, x3
sub x3, x2, #1
tst x1, x3 // end cache line aligned?
bic x1, x1, x3
b.eq 1f
dc civac, x1 // clean & invalidate D / U line
1: tst x0, x3 // start cache line aligned?
bic x0, x0, x3
b.eq 2f
dc civac, x0 // clean & invalidate D / U line
b 3f
2: dc ivac, x0 // invalidate D / U line
3: add x0, x0, x2
cmp x0, x1
b.lo 2b
dsb sy
ret
ENDPROC(__inval_cache_range)
ENDPROC(__dma_inv_range)

arch/arm64/kernel/context.c

@ -0,0 +1,191 @@
/* context.c COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <ihk/context.h>
#include <ihk/debug.h>
#include <thread_info.h>
#include <cputype.h>
#include <mmu_context.h>
#include <arch-memory.h>
#include <irqflags.h>
#include <lwk/compiler.h>
#include <bitops.h>
/* @ref.impl arch/arm64/include/asm/mmu_context.h::MAX_ASID_BITS */
#define MAX_ASID_BITS 16
#define ASID_FIRST_VERSION (1 << MAX_ASID_BITS)
#define ASID_MASK ((1 << MAX_ASID_BITS) - 1)
#define VERSION_MASK (0xFFFF << MAX_ASID_BITS)
/* @ref.impl arch/arm64/mm/context.c::asid_bits */
#define asid_bits(reg) \
(((read_cpuid(ID_AA64MMFR0_EL1) & 0xf0) >> 2) + 8)
#define MAX_CTX_NR (1UL << MAX_ASID_BITS)
DECLARE_BITMAP(mmu_context_bmap, MAX_CTX_NR) = { 1 }; /* context number 0 reserved. */
/* cpu_asid lock */
static ihk_spinlock_t cpu_asid_lock = SPIN_LOCK_UNLOCKED;
/* last allocation ASID, initialized by 0x0001_0000 */
static unsigned int cpu_last_asid = ASID_FIRST_VERSION;
/* @ref.impl arch/arm64/mm/context.c::set_mm_context */
/* set asid for kernel_context_t.context */
static void set_mm_context(struct page_table *pgtbl, unsigned int asid)
{
unsigned int context = get_address_space_id(pgtbl);
if (likely((context ^ cpu_last_asid) >> MAX_ASID_BITS)) {
set_address_space_id(pgtbl, asid);
}
}
/* @ref.impl arch/arm64/mm/context.c::__new_context */
/* ASID allocation for new process function */
static inline void __new_context(struct page_table *pgtbl)
{
unsigned int asid;
unsigned int bits = asid_bits();
unsigned long flags;
unsigned int context = get_address_space_id(pgtbl);
unsigned long index = 0;
flags = ihk_mc_spinlock_lock(&cpu_asid_lock);
/* already assigned context number? */
if (!unlikely((context ^ cpu_last_asid) >> MAX_ASID_BITS)) {
/* true, unnecessary assigned context number */
ihk_mc_spinlock_unlock(&cpu_asid_lock, flags);
return;
}
/* false, necessary assigned context number */
/* search from the previous assigned number */
index = (cpu_last_asid & ASID_MASK) + 1;
asid = find_next_zero_bit(mmu_context_bmap, MAX_CTX_NR, index);
/* upper limit exceeded */
if (asid >= (1 << bits)) {
/* re assigned context number, search from 1 */
asid = find_next_zero_bit(mmu_context_bmap, index, 1);
/* upper previous assigned number, goto panic */
if (unlikely(asid >= index)) {
ihk_mc_spinlock_unlock(&cpu_asid_lock, flags);
panic("__new_context(): PANIC: Context Number Depletion.\n");
}
}
/* set assigned context number bitmap */
mmu_context_bmap[asid >> 6] |= (1UL << (asid & 63));
/* set previous assigned context number */
cpu_last_asid = asid | (cpu_last_asid & VERSION_MASK);
set_mm_context(pgtbl, cpu_last_asid);
ihk_mc_spinlock_unlock(&cpu_asid_lock, flags);
}
void free_mmu_context(struct page_table *pgtbl)
{
unsigned int context = get_address_space_id(pgtbl);
unsigned int nr = context & ASID_MASK;
unsigned long flags = ihk_mc_spinlock_lock(&cpu_asid_lock);
/* clear used context number bitmap */
mmu_context_bmap[nr >> 6] &= ~(1UL << (nr & 63));
ihk_mc_spinlock_unlock(&cpu_asid_lock, flags);
}
/* set ttbr0 assembler code extern */
/* in arch/arm64/kernel/proc.S */
extern void *cpu_do_switch_mm(translation_table_t* tt_pa, unsigned int asid);
/* @ref.impl arch/arm64/include/asm/mmu_context.h::switch_new_context */
/* ASID allocation for new process */
static inline void switch_new_context(struct page_table *pgtbl)
{
unsigned long flags;
translation_table_t* tt_pa;
unsigned int context;
/* ASID allocation */
__new_context(pgtbl);
context = get_address_space_id(pgtbl);
/* disable interrupt save */
flags = cpu_disable_interrupt_save();
tt_pa = get_translation_table_as_paddr(pgtbl);
cpu_do_switch_mm(tt_pa, context & ASID_MASK);
/* interrupt restore */
cpu_restore_interrupt(flags);
}
/* @ref.impl arch/arm64/include/asm/mmu_context.h::check_and_switch_context */
/* ASID allocation */
void switch_mm(struct page_table *pgtbl)
{
unsigned int context = get_address_space_id(pgtbl);
/* During switch_mm, you want to disable the TTBR */
cpu_set_reserved_ttbr0();
/* check new process or existing process */
if (!((context ^ cpu_last_asid) >> MAX_ASID_BITS)) {
translation_table_t* tt_pa;
/* for existing process */
tt_pa = get_translation_table_as_paddr(pgtbl);
cpu_do_switch_mm(tt_pa, context & ASID_MASK);
/* TODO: tif_switch_mm / after context switch */
// } else if (irqs_disabled()) {
// /*
// * Defer the new ASID allocation until after the context
// * switch critical region since __new_context() cannot be
// * called with interrupts disabled.
// */
// set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
} else {
/* for new process */
/* ASID allocation & set ttbr0 */
switch_new_context(pgtbl);
}
}
/* context switch assembler code extern */
/* in arch/arm64/kernel/entry.S */
extern void *cpu_switch_to(struct thread_info *prev, struct thread_info *next, void *prev_proc);
/* context switch C function */
/* TODO: fpreg etc.. save & restore */
static inline void *switch_to(struct thread_info *prev,
struct thread_info *next,
void *prev_proc)
{
void *last = NULL;
next->cpu = ihk_mc_get_processor_id();
last = cpu_switch_to(prev, next, prev_proc);
return last;
}
/* common unit I/F, for context switch */
void *ihk_mc_switch_context(ihk_mc_kernel_context_t *old_ctx,
ihk_mc_kernel_context_t *new_ctx,
void *prev)
{
struct thread_info *prev_ti = NULL;
struct thread_info *next_ti = NULL;
/* get next thread_info addr */
next_ti = new_ctx->thread;
if (likely(old_ctx)) {
/* get prev thread_info addr */
prev_ti = old_ctx->thread;
}
/* switch next thread_info & process */
return switch_to(prev_ti, next_ti, prev);
}


@ -0,0 +1,194 @@
/* copy_template.S COPYRIGHT FUJITSU LIMITED 2017 */
/*
* Copyright (C) 2013 ARM Ltd.
* Copyright (C) 2013 Linaro.
*
* This code is based on glibc cortex strings work originally authored by Linaro
* and re-licensed under GPLv2 for the Linux kernel. The original code can
* be found @
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Copy a buffer from src to dest (alignment handled by the hardware)
*
* Parameters:
* x0 - dest
* x1 - src
* x2 - n
* Returns:
* x0 - dest
*/
dstin .req x0
src .req x1
count .req x2
tmp1 .req x3
tmp1w .req w3
tmp2 .req x4
tmp2w .req w4
dst .req x6
A_l .req x7
A_h .req x8
B_l .req x9
B_h .req x10
C_l .req x11
C_h .req x12
D_l .req x13
D_h .req x14
mov dst, dstin
cmp count, #16
/*When memory length is less than 16, the accessed are not aligned.*/
b.lo .Ltiny15
neg tmp2, src
ands tmp2, tmp2, #15/* Bytes to reach alignment. */
b.eq .LSrcAligned
sub count, count, tmp2
/*
* Copy the leading memory data from src to dst in an increasing
* address order.By this way,the risk of overwritting the source
* memory data is eliminated when the distance between src and
* dst is less than 16. The memory accesses here are alignment.
*/
tbz tmp2, #0, 1f
ldrb1 tmp1w, src, #1
strb1 tmp1w, dst, #1
1:
tbz tmp2, #1, 2f
ldrh1 tmp1w, src, #2
strh1 tmp1w, dst, #2
2:
tbz tmp2, #2, 3f
ldr1 tmp1w, src, #4
str1 tmp1w, dst, #4
3:
tbz tmp2, #3, .LSrcAligned
ldr1 tmp1, src, #8
str1 tmp1, dst, #8
.LSrcAligned:
cmp count, #64
b.ge .Lcpy_over64
/*
* Deal with small copies quickly by dropping straight into the
* exit block.
*/
.Ltail63:
/*
* Copy up to 48 bytes of data. At this point we only need the
* bottom 6 bits of count to be accurate.
*/
ands tmp1, count, #0x30
b.eq .Ltiny15
cmp tmp1w, #0x20
b.eq 1f
b.lt 2f
ldp1 A_l, A_h, src, #16
stp1 A_l, A_h, dst, #16
1:
ldp1 A_l, A_h, src, #16
stp1 A_l, A_h, dst, #16
2:
ldp1 A_l, A_h, src, #16
stp1 A_l, A_h, dst, #16
.Ltiny15:
/*
* Prefer to break one ldp/stp into several load/store to access
* memory in an increasing address order,rather than to load/store 16
* bytes from (src-16) to (dst-16) and to backward the src to aligned
* address,which way is used in original cortex memcpy. If keeping
* the original memcpy process here, memmove need to satisfy the
* precondition that src address is at least 16 bytes bigger than dst
* address,otherwise some source data will be overwritten when memove
* call memcpy directly. To make memmove simpler and decouple the
* memcpy's dependency on memmove, withdrew the original process.
*/
tbz count, #3, 1f
ldr1 tmp1, src, #8
str1 tmp1, dst, #8
1:
tbz count, #2, 2f
ldr1 tmp1w, src, #4
str1 tmp1w, dst, #4
2:
tbz count, #1, 3f
ldrh1 tmp1w, src, #2
strh1 tmp1w, dst, #2
3:
tbz count, #0, .Lexitfunc
ldrb1 tmp1w, src, #1
strb1 tmp1w, dst, #1
b .Lexitfunc
.Lcpy_over64:
subs count, count, #128
b.ge .Lcpy_body_large
/*
* Less than 128 bytes to copy, so handle 64 here and then jump
* to the tail.
*/
ldp1 A_l, A_h, src, #16
stp1 A_l, A_h, dst, #16
ldp1 B_l, B_h, src, #16
ldp1 C_l, C_h, src, #16
stp1 B_l, B_h, dst, #16
stp1 C_l, C_h, dst, #16
ldp1 D_l, D_h, src, #16
stp1 D_l, D_h, dst, #16
tst count, #0x3f
b.ne .Ltail63
b .Lexitfunc
/*
* Critical loop. Start at a new cache line boundary. Assuming
* 64 bytes per line this ensures the entire loop is in one line.
*/
.p2align L1_CACHE_SHIFT
.Lcpy_body_large:
/* Pre-load 64 bytes of data. */
ldp1 A_l, A_h, src, #16
ldp1 B_l, B_h, src, #16
ldp1 C_l, C_h, src, #16
ldp1 D_l, D_h, src, #16
1:
/*
* Interleave the load of the next 64-byte data block with the store
* of the previously loaded 64 bytes.
*/
stp1 A_l, A_h, dst, #16
ldp1 A_l, A_h, src, #16
stp1 B_l, B_h, dst, #16
ldp1 B_l, B_h, src, #16
stp1 C_l, C_h, dst, #16
ldp1 C_l, C_h, src, #16
stp1 D_l, D_h, dst, #16
ldp1 D_l, D_h, src, #16
subs count, count, #64
b.ge 1b
stp1 A_l, A_h, dst, #16
stp1 B_l, B_h, dst, #16
stp1 C_l, C_h, dst, #16
stp1 D_l, D_h, dst, #16
tst count, #0x3f
b.ne .Ltail63
.Lexitfunc:
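
For readers following the assembly, here is a minimal C sketch of the same strategy, assuming 8-byte word copies where the real routine streams paired 16-byte ldp/stp accesses; the three phases (head alignment, 64-byte bulk loop, in-order tail) match the labels above, and the function name is illustrative.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

void *memcpy_sketch(void *dstin, const void *srcin, size_t n)
{
	unsigned char *dst = dstin;
	const unsigned char *src = srcin;

	if (n >= 16) {
		/* Head: copy up to 15 bytes, in increasing address order,
		 * until src reaches 16-byte alignment (tmp2 above). */
		size_t head = (size_t)(-(uintptr_t)src & 15);
		n -= head;
		while (head--)
			*dst++ = *src++;
		/* Bulk (.Lcpy_body_large): 64 bytes in flight per pass. */
		while (n >= 64) {
			size_t i;
			for (i = 0; i < 64; i += 8) {	/* stands in for the ldp/stp pairs */
				uint64_t w;
				memcpy(&w, src + i, 8);
				memcpy(dst + i, &w, 8);
			}
			dst += 64;
			src += 64;
			n -= 64;
		}
	}
	/* Tail (.Ltail63/.Ltiny15): drain the remainder, again in
	 * increasing address order so memmove can reuse this path. */
	while (n--)
		*dst++ = *src++;
	return dstin;
}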

@@ -0,0 +1,35 @@
/* coredump.c COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifdef POSTK_DEBUG_ARCH_DEP_18 /* coredump arch separation. */
#include <process.h>
#include <elfcore.h>
#include <string.h>
void arch_fill_prstatus(struct elf_prstatus64 *prstatus, struct thread *thread, void *regs0)
{
struct pt_regs *regs = regs0;
struct elf_prstatus64 tmp_prstatus;
/*
We ignore the following entries for now.
struct elf_siginfo pr_info;
short int pr_cursig;
a8_uint64_t pr_sigpend;
a8_uint64_t pr_sighold;
pid_t pr_pid;
pid_t pr_ppid;
pid_t pr_pgrp;
pid_t pr_sid;
struct prstatus64_timeval pr_utime;
struct prstatus64_timeval pr_stime;
struct prstatus64_timeval pr_cutime;
struct prstatus64_timeval pr_cstime;
*/
/* copy x0-30, sp, pc, pstate */
memcpy(&tmp_prstatus.pr_reg, &regs->user_regs, sizeof(tmp_prstatus.pr_reg));
tmp_prstatus.pr_fpvalid = 0; /* We assume no fp */
/* copy unaligned prstatus addr */
memcpy(prstatus, &tmp_prstatus, sizeof(*prstatus));
}
#endif /* POSTK_DEBUG_ARCH_DEP_18 */
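
For orientation, a hedged sketch of the register block the memcpy above copies into pr_reg; it mirrors the AArch64 user_pt_regs layout implied by "copy x0-30, sp, pc, pstate", with an illustrative struct name.

#include <stdint.h>

struct user_regs_sketch {
	uint64_t regs[31];	/* x0..x30 */
	uint64_t sp;
	uint64_t pc;
	uint64_t pstate;
};	/* 34 * 8 == 272 bytes end up in prstatus->pr_reg */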

arch/arm64/kernel/cpu.c (new file, 1636 lines)

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,14 @@
/* cputable.c COPYRIGHT FUJITSU LIMITED 2015 */
#include <cputable.h>
extern unsigned long __cpu_setup(void);
struct cpu_info cpu_table[] = {
{
.cpu_id_val = 0x000f0000,
.cpu_id_mask = 0x000f0000,
.cpu_name = "AArch64 Processor",
.cpu_setup = __cpu_setup,
},
{ /* Empty */ },
};
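
A hedged sketch of how cpu_table is presumably consumed at boot: match the masked MIDR_EL1 value against each entry and stop at the empty terminator. lookup_cpu_info() is illustrative and not part of this diff.

static struct cpu_info *lookup_cpu_info(unsigned long midr)
{
	struct cpu_info *info;

	for (info = cpu_table; info->cpu_name != NULL; info++) {
		if ((midr & info->cpu_id_mask) == info->cpu_id_val)
			return info;	/* e.g. "AArch64 Processor" above */
	}
	return NULL;	/* reached the empty terminator */
}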

@@ -0,0 +1,110 @@
/* debug-monitors.c COPYRIGHT FUJITSU LIMITED 2016-2017 */
#include <cputype.h>
#include <irqflags.h>
#include <ihk/context.h>
#include <ihk/debug.h>
#include <signal.h>
#include <errno.h>
#include <debug-monitors.h>
#include <cls.h>
#include <thread_info.h>
/* @ref.impl arch/arm64/kernel/debug-monitors.c::debug_monitors_arch */
/* Determine debug architecture. */
unsigned char debug_monitors_arch(void)
{
return read_cpuid(ID_AA64DFR0_EL1) & 0xf;
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::mdscr_write */
void mdscr_write(unsigned int mdscr)
{
unsigned long flags = local_dbg_save();
asm volatile("msr mdscr_el1, %0" :: "r" (mdscr));
local_dbg_restore(flags);
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::mdscr_read */
unsigned int mdscr_read(void)
{
unsigned int mdscr;
asm volatile("mrs %0, mdscr_el1" : "=r" (mdscr));
return mdscr;
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::clear_os_lock */
static void clear_os_lock(void)
{
asm volatile("msr oslar_el1, %0" : : "r" (0));
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::debug_monitors_init */
void debug_monitors_init(void)
{
clear_os_lock();
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::set_regs_spsr_ss */
void set_regs_spsr_ss(struct pt_regs *regs)
{
unsigned long spsr;
spsr = regs->pstate;
spsr &= ~DBG_SPSR_SS;
spsr |= DBG_SPSR_SS;
regs->pstate = spsr;
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::clear_regs_spsr_ss */
void clear_regs_spsr_ss(struct pt_regs *regs)
{
unsigned long spsr;
spsr = regs->pstate;
spsr &= ~DBG_SPSR_SS;
regs->pstate = spsr;
}
extern int interrupt_from_user(void *);
extern void clear_single_step(struct thread *thread);
/* @ref.impl arch/arm64/kernel/debug-monitors.c::single_step_handler */
int single_step_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
siginfo_t info;
int ret = -EFAULT;
if (interrupt_from_user(regs)) {
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_HWBKPT;
info._sifields._sigfault.si_addr = (void *)regs->pc;
set_signal(SIGTRAP, regs, &info);
clear_single_step(cpu_local_var(current));
ret = 0;
} else {
kprintf("Unexpected kernel single-step exception at EL1\n");
}
return ret;
}
/* @ref.impl arch/arm64/kernel/debug-monitors.c::brk_handler */
int brk_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
siginfo_t info;
int ret = -EFAULT;
if (interrupt_from_user(regs)) {
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_BRKPT;
info._sifields._sigfault.si_addr = (void *)regs->pc;
set_signal(SIGTRAP, regs, &info);
ret = 0;
} else {
kprintf("Unexpected kernel BRK exception at EL1\n");
}
return ret;
}
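
A hedged usage sketch tying the helpers above together: arming a single step for a thread that is about to return to EL0, in the style of the Linux reference implementation. DBG_MDSCR_SS (MDSCR_EL1 bit 0) is assumed to be defined in debug-monitors.h.

/* assumption: debug-monitors.h provides DBG_MDSCR_SS == (1 << 0) */
static void arm_single_step_sketch(struct pt_regs *regs)
{
	set_regs_spsr_ss(regs);				/* SPSR.SS: step once after eret */
	mdscr_write(mdscr_read() | DBG_MDSCR_SS);	/* enable SS debug events */
}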

@@ -0,0 +1,126 @@
/* entry-fpsimd.S COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <linkage.h>
#include <assembler.h>
#include <fpsimdmacros.h>
/*
* @ref.impl linux-linaro/arch/arm64/include/asm/fpsimdmacros.h
*/
/*
* FP/SIMD state saving and restoring macros
*
* Copyright (C) 2012 ARM Ltd.
* Author: Catalin Marinas <catalin.marinas@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
.macro fpsimd_save state, tmpnr
stp q0, q1, [\state, #16 * 0]
stp q2, q3, [\state, #16 * 2]
stp q4, q5, [\state, #16 * 4]
stp q6, q7, [\state, #16 * 6]
stp q8, q9, [\state, #16 * 8]
stp q10, q11, [\state, #16 * 10]
stp q12, q13, [\state, #16 * 12]
stp q14, q15, [\state, #16 * 14]
stp q16, q17, [\state, #16 * 16]
stp q18, q19, [\state, #16 * 18]
stp q20, q21, [\state, #16 * 20]
stp q22, q23, [\state, #16 * 22]
stp q24, q25, [\state, #16 * 24]
stp q26, q27, [\state, #16 * 26]
stp q28, q29, [\state, #16 * 28]
stp q30, q31, [\state, #16 * 30]!
mrs x\tmpnr, fpsr
str w\tmpnr, [\state, #16 * 2]
mrs x\tmpnr, fpcr
str w\tmpnr, [\state, #16 * 2 + 4]
.endm
.macro fpsimd_restore_fpcr state, tmp
/*
* Writes to fpcr may be self-synchronising, so avoid restoring
* the register if it hasn't changed.
*/
mrs \tmp, fpcr
cmp \tmp, \state
b.eq 9999f
msr fpcr, \state
9999:
.endm
/* Clobbers \state */
.macro fpsimd_restore state, tmpnr
ldp q0, q1, [\state, #16 * 0]
ldp q2, q3, [\state, #16 * 2]
ldp q4, q5, [\state, #16 * 4]
ldp q6, q7, [\state, #16 * 6]
ldp q8, q9, [\state, #16 * 8]
ldp q10, q11, [\state, #16 * 10]
ldp q12, q13, [\state, #16 * 12]
ldp q14, q15, [\state, #16 * 14]
ldp q16, q17, [\state, #16 * 16]
ldp q18, q19, [\state, #16 * 18]
ldp q20, q21, [\state, #16 * 20]
ldp q22, q23, [\state, #16 * 22]
ldp q24, q25, [\state, #16 * 24]
ldp q26, q27, [\state, #16 * 26]
ldp q28, q29, [\state, #16 * 28]
ldp q30, q31, [\state, #16 * 30]!
ldr w\tmpnr, [\state, #16 * 2]
msr fpsr, x\tmpnr
ldr w\tmpnr, [\state, #16 * 2 + 4]
fpsimd_restore_fpcr x\tmpnr, \state
.endm
/*
* @ref.impl linux-linaro/arch/arm64/kernel/entry-fpsimd.S
*/
/*
* Save the FP registers.
*
* x0 - pointer to struct fpsimd_state
*/
ENTRY(fpsimd_save_state)
fpsimd_save x0, 8
ret
ENDPROC(fpsimd_save_state)
/*
* Load the FP registers.
*
* x0 - pointer to struct fpsimd_state
*/
ENTRY(fpsimd_load_state)
fpsimd_restore x0, 8
ret
ENDPROC(fpsimd_load_state)
#ifdef CONFIG_ARM64_SVE
ENTRY(sve_save_state)
sve_save 0, x1, 2
ret
ENDPROC(sve_save_state)
ENTRY(sve_load_state)
sve_load 0, x1, x2, 3
ret
ENDPROC(sve_load_state)
ENTRY(sve_get_vl)
_zrdvl 0, 1
ret
ENDPROC(sve_get_vl)
#endif /* CONFIG_ARM64_SVE */
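
Note the base-register trick in the macros above: the '!' write-back on the q30/q31 pair advances \state by 16 * 30 bytes, so the later fpsr/fpcr accesses at [\state, #16 * 2] and [\state, #16 * 2 + 4] land at offsets 512 and 516, immediately after the vector registers. A hedged C view of the buffer x0 must point at (struct and field names illustrative):

struct fpsimd_state_sketch {
	__uint128_t vregs[32];	/* q0..q31, offsets 0..511 */
	uint32_t fpsr;		/* offset 512 == 16*30 + 16*2 */
	uint32_t fpcr;		/* offset 516 */
};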

arch/arm64/kernel/entry.S (new file, 558 lines)

@@ -0,0 +1,558 @@
/* entry.S COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <linkage.h>
#include <assembler.h>
#include <asm-offsets.h>
#include <esr.h>
#include <thread_info.h>
/*
* Bad Abort numbers
*-----------------
*/
#define BAD_SYNC 0
#define BAD_IRQ 1
#define BAD_FIQ 2
#define BAD_ERROR 3
.macro kernel_entry, el, regsize = 64
sub sp, sp, #S_FRAME_SIZE
.if \regsize == 32
mov w0, w0 // zero upper 32 bits of x0
.endif
stp x0, x1, [sp, #16 * 0]
stp x2, x3, [sp, #16 * 1]
stp x4, x5, [sp, #16 * 2]
stp x6, x7, [sp, #16 * 3]
stp x8, x9, [sp, #16 * 4]
stp x10, x11, [sp, #16 * 5]
stp x12, x13, [sp, #16 * 6]
stp x14, x15, [sp, #16 * 7]
stp x16, x17, [sp, #16 * 8]
stp x18, x19, [sp, #16 * 9]
stp x20, x21, [sp, #16 * 10]
stp x22, x23, [sp, #16 * 11]
stp x24, x25, [sp, #16 * 12]
stp x26, x27, [sp, #16 * 13]
stp x28, x29, [sp, #16 * 14]
.if \el == 0
mrs x21, sp_el0
get_thread_info tsk // Ensure MDSCR_EL1.SS is clear,
ldr x19, [tsk, #TI_FLAGS] // since we can unmask debug
disable_step_tsk x19, x20 // exceptions when scheduling.
.else
add x21, sp, #S_FRAME_SIZE
.endif
mrs x22, elr_el1
mrs x23, spsr_el1
#if defined(CONFIG_HAS_NMI)
mrs_s x20, ICC_PMR_EL1 // Get PMR
and x20, x20, #ICC_PMR_EL1_G_BIT // Extract mask bit
lsl x20, x20, #PSR_G_PMR_G_SHIFT // Shift to a PSTATE RES0 bit
eor x20, x20, #PSR_G_BIT // Invert bit
orr x23, x20, x23 // Store PMR within PSTATE
mov x20, #ICC_PMR_EL1_MASKED
msr_s ICC_PMR_EL1, x20 // Mask normal interrupts at PMR
#endif /* defined(CONFIG_HAS_NMI) */
stp lr, x21, [sp, #S_LR]
stp x22, x23, [sp, #S_PC]
/*
* Set syscallno to -1 by default (overridden later if real syscall).
*/
.if \el == 0
mvn x21, xzr
str x21, [sp, #S_SYSCALLNO]
.endif
/*
* Registers that may be useful after this macro is invoked:
*
* x21 - aborted SP
* x22 - aborted PC
* x23 - aborted PSTATE
*/
.endm
.macro kernel_exit, el, need_enable_step = 0
.if \el == 0
mov x0, #0
mov x1, sp
mov x2, #0
bl check_signal // check whether the signal is delivered
bl check_need_resched // or reschedule is needed.
mov x0, #0
mov x1, sp
mov x2, #0
bl check_signal_irq_disabled // check whether the signal is delivered (for kernel_exit)
.endif
disable_irq x1 // disable interrupts
.if \need_enable_step == 1
ldr x1, [tsk, #TI_FLAGS]
enable_step_tsk x1, x2
.endif
disable_nmi
ldp x21, x22, [sp, #S_PC] // load ELR, SPSR
.if \el == 0
// ct_user_enter // McKernel, disable (debugcode?)
ldr x23, [sp, #S_SP] // load return stack pointer
msr sp_el0, x23
.endif
#if defined(CONFIG_HAS_NMI)
and x20, x22, #PSR_G_BIT // Get stolen PSTATE bit
and x22, x22, #~PSR_G_BIT // Clear stolen bit
lsr x20, x20, #PSR_G_PMR_G_SHIFT // Shift back to PMR mask
eor x20, x20, #ICC_PMR_EL1_UNMASKED // x20 gets 0xf0 or 0xb0
msr_s ICC_PMR_EL1, x20 // Write to PMR
#endif /* defined(CONFIG_HAS_NMI) */
msr elr_el1, x21 // set up the return data
msr spsr_el1, x22
ldp x0, x1, [sp, #16 * 0]
ldp x2, x3, [sp, #16 * 1]
ldp x4, x5, [sp, #16 * 2]
ldp x6, x7, [sp, #16 * 3]
ldp x8, x9, [sp, #16 * 4]
ldp x10, x11, [sp, #16 * 5]
ldp x12, x13, [sp, #16 * 6]
ldp x14, x15, [sp, #16 * 7]
ldp x16, x17, [sp, #16 * 8]
ldp x18, x19, [sp, #16 * 9]
ldp x20, x21, [sp, #16 * 10]
ldp x22, x23, [sp, #16 * 11]
ldp x24, x25, [sp, #16 * 12]
ldp x26, x27, [sp, #16 * 13]
ldp x28, x29, [sp, #16 * 14]
ldr lr, [sp, #S_LR]
add sp, sp, #S_FRAME_SIZE // restore sp
eret // return from the exception
.endm
.macro get_thread_info, rd
mov \rd, sp
and \rd, \rd, #~(KERNEL_STACK_SIZE - 1) // top of stack
.endm
/*
* These are the registers used in the syscall handler, and allow us to
* have in theory up to 7 arguments to a function - x0 to x6.
*
* x7 is reserved for the system call number in 32-bit mode.
*/
sc_nr .req x25 // number of system calls
scno .req x26 // syscall number
stbl .req x27 // syscall table pointer
tsk .req x28 // current thread_info
/*
* Interrupt handling.
*/
.macro irq_handler
adrp x1, handle_arch_irq
ldr x1, [x1, #:lo12:handle_arch_irq]
mov x0, sp
blr x1
.endm
.text
/*
* Exception vectors.
*/
.align 11
ENTRY(vectors)
ventry el1_sync_invalid // Synchronous EL1t
ventry el1_irq_invalid // IRQ EL1t
ventry el1_fiq_invalid // FIQ EL1t
ventry el1_error_invalid // Error EL1t
ventry el1_sync // Synchronous EL1h
ventry el1_irq // IRQ EL1h
ventry el1_fiq_invalid // FIQ EL1h
ventry el1_error_invalid // Error EL1h
ventry el0_sync // Synchronous 64-bit EL0
ventry el0_irq // IRQ 64-bit EL0
ventry el0_fiq_invalid // FIQ 64-bit EL0
ventry el0_error_invalid // Error 64-bit EL0
ventry el0_sync_invalid // Synchronous 32-bit EL0
ventry el0_irq_invalid // IRQ 32-bit EL0
ventry el0_fiq_invalid // FIQ 32-bit EL0
ventry el0_error_invalid // Error 32-bit EL0
END(vectors)
/*
* Invalid mode handlers
*/
.macro inv_entry, el, reason, regsize = 64
kernel_entry el, \regsize
mov x0, sp
mov x1, #\reason
mrs x2, esr_el1
enable_nmi
.if \el == 0
bl bad_mode
b ret_to_user
.else
b bad_mode
.endif
.endm
el0_sync_invalid:
inv_entry 0, BAD_SYNC
ENDPROC(el0_sync_invalid)
el0_irq_invalid:
inv_entry 0, BAD_IRQ
ENDPROC(el0_irq_invalid)
el0_fiq_invalid:
inv_entry 0, BAD_FIQ
ENDPROC(el0_fiq_invalid)
el0_error_invalid:
inv_entry 0, BAD_ERROR
ENDPROC(el0_error_invalid)
el1_sync_invalid:
inv_entry 1, BAD_SYNC
ENDPROC(el1_sync_invalid)
el1_irq_invalid:
inv_entry 1, BAD_IRQ
ENDPROC(el1_irq_invalid)
el1_fiq_invalid:
inv_entry 1, BAD_FIQ
ENDPROC(el1_fiq_invalid)
el1_error_invalid:
inv_entry 1, BAD_ERROR
ENDPROC(el1_error_invalid)
/*
* EL1 mode handlers.
*/
.align 6
el1_sync:
kernel_entry 1
mrs x1, esr_el1 // read the syndrome register
lsr x24, x1, #ESR_ELx_EC_SHIFT // exception class
cmp x24, #ESR_ELx_EC_DABT_CUR // data abort in EL1
b.eq el1_da
// cmp x24, #ESR_ELx_EC_IABT_CUR // instruction abort in EL1
// b.eq el1_ia
cmp x24, #ESR_ELx_EC_SYS64 // configurable trap
b.eq el1_undef
cmp x24, #ESR_ELx_EC_SP_ALIGN // stack alignment exception
b.eq el1_sp_pc
cmp x24, #ESR_ELx_EC_PC_ALIGN // pc alignment exception
b.eq el1_sp_pc
cmp x24, #ESR_ELx_EC_UNKNOWN // unknown exception in EL1
b.eq el1_undef
// cmp x24, #ESR_ELx_EC_BREAKPT_CUR // debug exception in EL1
// b.ge el1_dbg
b el1_inv
el1_ia:
/*
* Fall through to the Data abort case
*/
el1_da:
/*
* Data abort handling
*/
mrs x0, far_el1
enable_nmi
enable_dbg
#if defined(CONFIG_HAS_NMI)
# define PSR_INTR_SHIFT PSR_G_SHIFT // PSR_G_BIT
#else /* defined(CONFIG_HAS_NMI) */
# define PSR_INTR_SHIFT 7 // PSR_I_BIT
#endif /* defined(CONFIG_HAS_NMI) */
// re-enable interrupts if they were enabled in the aborted context
tbnz x23, #PSR_INTR_SHIFT, 1f
enable_irq x2
1:
mov x2, sp // struct pt_regs
bl do_mem_abort
// disable interrupts before pulling preserved data off the stack
kernel_exit 1
el1_sp_pc:
/*
* Stack or PC alignment exception handling
*/
mrs x0, far_el1
enable_nmi
enable_dbg
mov x2, sp
b do_sp_pc_abort
el1_undef:
/*
* Undefined instruction
*/
enable_nmi
enable_dbg
mov x0, sp
b do_undefinstr
// el1_dbg:
// /*
// * Debug exception handling
// */
// cmp x24, #ESR_ELx_EC_BRK64 // if BRK64
// cinc x24, x24, eq // set bit '0'
// tbz x24, #0, el1_inv // EL1 only
// mrs x0, far_el1
// mov x2, sp // struct pt_regs
// bl do_debug_exception
// kernel_exit 1
el1_inv:
// TODO: add support for undefined instructions in kernel mode
mov x0, sp
mov x1, #BAD_SYNC
mrs x2, esr_el1
enable_nmi
enable_dbg
b bad_mode
ENDPROC(el1_sync)
/*
* EL1 mode handlers.
*/
.align 6
el1_irq:
kernel_entry 1
enable_dbg
irq_handler
kernel_exit 1
ENDPROC(el1_irq)
/*
* EL0 mode handlers.
*/
.align 6
el0_sync:
kernel_entry 0
mrs x25, esr_el1 // read the syndrome register
lsr x24, x25, #ESR_ELx_EC_SHIFT // exception class
cmp x24, #ESR_ELx_EC_SVC64 // SVC in 64-bit state
b.eq el0_svc
cmp x24, #ESR_ELx_EC_DABT_LOW // data abort in EL0
b.eq el0_da
cmp x24, #ESR_ELx_EC_IABT_LOW // instruction abort in EL0
b.eq el0_ia
cmp x24, #ESR_ELx_EC_FP_ASIMD // FP/ASIMD access
b.eq el0_fpsimd_acc
#ifdef CONFIG_ARM64_SVE
cmp x24, #ESR_ELx_EC_SVE // SVE access
b.eq el0_sve_acc
#endif
cmp x24, #ESR_ELx_EC_FP_EXC64 // FP/ASIMD exception
b.eq el0_fpsimd_exc
cmp x24, #ESR_ELx_EC_SYS64 // configurable trap
b.eq el0_undef
cmp x24, #ESR_ELx_EC_SP_ALIGN // stack alignment exception
b.eq el0_sp_pc
cmp x24, #ESR_ELx_EC_PC_ALIGN // pc alignment exception
b.eq el0_sp_pc
cmp x24, #ESR_ELx_EC_UNKNOWN // unknown exception in EL0
b.eq el0_undef
cmp x24, #ESR_ELx_EC_BREAKPT_LOW // debug exception in EL0
b.ge el0_dbg
b el0_inv
el0_svc:
uxtw scno, w8 // syscall number in w8
stp x0, scno, [sp, #S_ORIG_X0] // save the original x0 and syscall number
enable_nmi
enable_dbg_and_irq x0
adrp x16, __arm64_syscall_handler
ldr x16, [x16, #:lo12:__arm64_syscall_handler]
mov x0, scno
mov x1, sp
blr x16 // __arm64_syscall_handler(int, syscall_num, ihk_mc_user_context_t *uctx);
/* Signal checks have already been completed by the time we come back here. */
b ret_fast_syscall
el0_da:
/*
* Data abort handling
*/
mrs x26, far_el1
// enable interrupts before calling the main handler
enable_nmi
enable_dbg_and_irq x0
// ct_user_exit
bic x0, x26, #(0xff << 56)
mov x1, x25
mov x2, sp
bl do_mem_abort
b ret_to_user
el0_ia:
/*
* Instruction abort handling
*/
mrs x26, far_el1
// enable interrupts before calling the main handler
enable_nmi
enable_dbg_and_irq x0
// ct_user_exit
mov x0, x26
mov x1, x25
mov x2, sp
bl do_mem_abort
b ret_to_user
el0_fpsimd_acc:
/*
* Floating Point or Advanced SIMD access
*/
enable_nmi
enable_dbg
// ct_user_exit
mov x0, x25
mov x1, sp
bl do_fpsimd_acc
b ret_to_user
#ifdef CONFIG_ARM64_SVE
/*
* Scalable Vector Extension access
*/
el0_sve_acc:
enable_nmi
enable_dbg
// ct_user_exit
mov x0, x25
mov x1, sp
bl do_sve_acc
b ret_to_user
#endif
el0_fpsimd_exc:
/*
* Floating Point, Advanced SIMD or SVE exception
*/
enable_nmi
enable_dbg
// ct_user_exit
mov x0, x25
mov x1, sp
bl do_fpsimd_exc
b ret_to_user
el0_sp_pc:
/*
* Stack or PC alignment exception handling
*/
mrs x26, far_el1
// enable interrupts before calling the main handler
enable_nmi
enable_dbg_and_irq x0
mov x0, x26
mov x1, x25
mov x2, sp
bl do_sp_pc_abort
b ret_to_user
el0_undef:
/*
* Undefined instruction
*/
// enable interrupts before calling the main handler
enable_nmi
enable_dbg_and_irq x0
// ct_user_exit
mov x0, sp
bl do_undefinstr
b ret_to_user
el0_dbg:
/*
* Debug exception handling
*/
tbnz x24, #0, el0_inv // EL0 only
mrs x0, far_el1
mov x1, x25
mov x2, sp
enable_nmi
bl do_debug_exception
enable_dbg
// ct_user_exit
b ret_to_user
el0_inv:
enable_dbg
mov x0, sp
mov x1, #BAD_SYNC
mrs x2, esr_el1
enable_nmi
bl bad_mode
b ret_to_user
ENDPROC(el0_sync)
.align 6
el0_irq:
kernel_entry 0
enable_dbg
irq_handler
b ret_to_user
ENDPROC(el0_irq)
/*
* Register switch for AArch64. The callee-saved registers need to be saved
* and restored. On entry:
* x0 = previous task_struct (must be preserved across the switch)
* x1 = next task_struct
* Previous and next are guaranteed not to be the same.
*
*/
ENTRY(cpu_switch_to)
cmp x0, xzr // for the idle process, branch (skip the save)
b.eq 1f
add x8, x0, #TI_CPU_CONTEXT
mov x9, sp
stp x19, x20, [x8], #16 // store callee-saved registers
stp x21, x22, [x8], #16
stp x23, x24, [x8], #16
stp x25, x26, [x8], #16
stp x27, x28, [x8], #16
stp x29, x9, [x8], #16
str lr, [x8]
1: add x8, x1, #TI_CPU_CONTEXT
ldp x19, x20, [x8], #16 // restore callee-saved registers
ldp x21, x22, [x8], #16
ldp x23, x24, [x8], #16
ldp x25, x26, [x8], #16
ldp x27, x28, [x8], #16
ldp x29, x9, [x8], #16
ldr lr, [x8]
mov sp, x9
mov x0, x2 // return void *prev
ret
ENDPROC(cpu_switch_to)
ret_fast_syscall:
kernel_exit 0, 1
ENDPROC(ret_fast_syscall)
/*
* "slow" syscall return path.
*/
ret_to_user:
no_work_pending:
kernel_exit 0, 1
ENDPROC(ret_to_user)
/*
* This is how we return from a fork.
*/
ENTRY(ret_from_fork)
// bl schedule_tail
cbz x19, 1f // not a kernel thread
mov x0, x20
blr x19
1: get_thread_info tsk
bl release_runq_lock
b ret_to_user
ENDPROC(ret_from_fork)
/* TODO: skeleton for rusage */
ENTRY(__freeze)
ENDPROC(__freeze)
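
For reference, a hedged sketch of the exception frame kernel_entry builds on the stack; the S_* offsets from asm-offsets.h are assumed to be generated from a pt_regs layout along these lines (field names illustrative, shape as in the Linux reference):

struct pt_regs_sketch {
	uint64_t regs[31];	/* x0..x30; lr (x30) goes to S_LR */
	uint64_t sp;		/* aborted SP: sp_el0, or sp + S_FRAME_SIZE */
	uint64_t pc;		/* ELR_EL1, saved/restored at S_PC */
	uint64_t pstate;	/* SPSR_EL1, plus the stolen PMR bit under CONFIG_HAS_NMI */
	uint64_t orig_x0;	/* S_ORIG_X0: x0 as of syscall entry */
	uint64_t syscallno;	/* S_SYSCALLNO: -1 unless a real syscall */
};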

arch/arm64/kernel/fault.c (new file, 289 lines)

@@ -0,0 +1,289 @@
/* fault.c COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <ihk/context.h>
#include <ihk/debug.h>
#include <ptrace.h>
#include <esr.h>
#include <signal.h>
#include <arch-memory.h>
#include <thread_info.h>
#include <syscall.h>
#include <debug-monitors.h>
unsigned long __page_fault_handler_address;
extern int interrupt_from_user(void *);
void set_signal(int sig, void *regs, struct siginfo *info);
static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs);
static int do_page_fault(unsigned long addr, unsigned int esr, struct pt_regs *regs);
static int do_translation_fault(unsigned long addr, unsigned int esr, struct pt_regs *regs);
static int do_bad(unsigned long addr, unsigned int esr, struct pt_regs *regs);
static int do_alignment_fault(unsigned long addr, unsigned int esr, struct pt_regs *regs);
static struct fault_info {
int (*fn)(unsigned long addr, unsigned int esr, struct pt_regs *regs);
int sig;
int code;
const char *name;
} fault_info[] = {
{ do_bad, SIGBUS, 0, "ttbr address size fault" },
{ do_bad, SIGBUS, 0, "level 1 address size fault" },
{ do_bad, SIGBUS, 0, "level 2 address size fault" },
{ do_bad, SIGBUS, 0, "level 3 address size fault" },
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 0 translation fault" },
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 1 translation fault" },
{ do_translation_fault, SIGSEGV, SEGV_MAPERR, "level 2 translation fault" },
{ do_page_fault, SIGSEGV, SEGV_MAPERR, "level 3 translation fault" },
{ do_bad, SIGBUS, 0, "unknown 8" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 1 access flag fault" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 2 access flag fault" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 3 access flag fault" },
{ do_bad, SIGBUS, 0, "unknown 12" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 1 permission fault" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 2 permission fault" },
{ do_page_fault, SIGSEGV, SEGV_ACCERR, "level 3 permission fault" },
{ do_bad, SIGBUS, 0, "synchronous external abort" },
{ do_bad, SIGBUS, 0, "unknown 17" },
{ do_bad, SIGBUS, 0, "unknown 18" },
{ do_bad, SIGBUS, 0, "unknown 19" },
{ do_bad, SIGBUS, 0, "synchronous external abort (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous external abort (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous external abort (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous external abort (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous parity error" },
{ do_bad, SIGBUS, 0, "unknown 25" },
{ do_bad, SIGBUS, 0, "unknown 26" },
{ do_bad, SIGBUS, 0, "unknown 27" },
{ do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" },
{ do_bad, SIGBUS, 0, "unknown 32" },
{ do_alignment_fault, SIGBUS, BUS_ADRALN, "alignment fault" },
{ do_bad, SIGBUS, 0, "unknown 34" },
{ do_bad, SIGBUS, 0, "unknown 35" },
{ do_bad, SIGBUS, 0, "unknown 36" },
{ do_bad, SIGBUS, 0, "unknown 37" },
{ do_bad, SIGBUS, 0, "unknown 38" },
{ do_bad, SIGBUS, 0, "unknown 39" },
{ do_bad, SIGBUS, 0, "unknown 40" },
{ do_bad, SIGBUS, 0, "unknown 41" },
{ do_bad, SIGBUS, 0, "unknown 42" },
{ do_bad, SIGBUS, 0, "unknown 43" },
{ do_bad, SIGBUS, 0, "unknown 44" },
{ do_bad, SIGBUS, 0, "unknown 45" },
{ do_bad, SIGBUS, 0, "unknown 46" },
{ do_bad, SIGBUS, 0, "unknown 47" },
{ do_bad, SIGBUS, 0, "TLB conflict abort" },
{ do_bad, SIGBUS, 0, "unknown 49" },
{ do_bad, SIGBUS, 0, "unknown 50" },
{ do_bad, SIGBUS, 0, "unknown 51" },
{ do_bad, SIGBUS, 0, "implementation fault (lockdown abort)" },
{ do_bad, SIGBUS, 0, "implementation fault (unsupported exclusive)" },
{ do_bad, SIGBUS, 0, "unknown 54" },
{ do_bad, SIGBUS, 0, "unknown 55" },
{ do_bad, SIGBUS, 0, "unknown 56" },
{ do_bad, SIGBUS, 0, "unknown 57" },
{ do_bad, SIGBUS, 0, "unknown 58" },
{ do_bad, SIGBUS, 0, "unknown 59" },
{ do_bad, SIGBUS, 0, "unknown 60" },
{ do_bad, SIGBUS, 0, "section domain fault" },
{ do_bad, SIGBUS, 0, "page domain fault" },
{ do_bad, SIGBUS, 0, "unknown 63" },
};
static const char *fault_name(unsigned int esr)
{
const struct fault_info *inf = fault_info + (esr & 63);
return inf->name;
}
/*
* Dispatch a data abort to the relevant handler.
*/
void do_mem_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
const struct fault_info *inf = fault_info + (esr & 63);
struct siginfo info;
/* set_cputime called in inf->fn() */
if (!inf->fn(addr, esr, regs))
return;
set_cputime(interrupt_from_user(regs)? 1: 2);
kprintf("Unhandled fault: %s (0x%08x) at 0x%016lx\n", inf->name, esr, addr);
info.si_signo = inf->sig;
info.si_errno = 0;
info.si_code = inf->code;
info._sifields._sigfault.si_addr = (void*)addr;
arm64_notify_die("", regs, &info, esr);
set_cputime(0);
}
/*
* Handle stack alignment exceptions.
*/
void do_sp_pc_abort(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
struct siginfo info;
set_cputime(interrupt_from_user(regs)? 1: 2);
info.si_signo = SIGBUS;
info.si_errno = 0;
info.si_code = BUS_ADRALN;
info._sifields._sigfault.si_addr = (void*)addr;
arm64_notify_die("", regs, &info, esr);
set_cputime(0);
}
static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
struct siginfo info;
set_cputime(interrupt_from_user(regs) ? 1: 2);
/*
* If we are in kernel mode at this point, we have no context to
* handle this fault with.
*/
if (interrupt_from_user(regs)) {
kprintf("unhandled %s (%d) at 0x%08lx, esr 0x%03x\n",
fault_name(esr), SIGSEGV, addr, esr);
current_thread_info()->fault_address = addr;
current_thread_info()->fault_code = esr;
info.si_signo = SIGSEGV;
info.si_errno = 0;
info.si_code = SEGV_MAPERR;
info._sifields._sigfault.si_addr = (void *)addr;
set_signal(SIGSEGV, regs, &info);
} else {
kprintf("Unable to handle kernel %s at virtual address %08lx\n",
(addr < PAGE_SIZE) ? "NULL pointer dereference" : "paging request", addr);
panic("OOps.");
}
set_cputime(0);
}
static int is_el0_instruction_abort(unsigned int esr)
{
return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
}
static int do_page_fault(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
void (*page_fault_handler)(void *, uint64_t, void *);
uint64_t reason = 0;
int esr_ec_dfsc = (esr & 63);
if (interrupt_from_user(regs)) {
reason |= PF_USER;
}
if (is_el0_instruction_abort(esr)) {
reason |= PF_INSTR;
} else if ((esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM)) {
reason |= PF_WRITE;
if (13 <= esr_ec_dfsc && esr_ec_dfsc <= 15 ) {
/* level [1-3] permission fault */
reason |= PF_PROT;
}
}
page_fault_handler = (void *)__page_fault_handler_address;
(*page_fault_handler)((void *)addr, reason, regs);
return 0;
}
/*
* First Level Translation Fault Handler
*
* We enter here because the first level page table doesn't contain a valid
* entry for the address.
*
* If the address is in kernel space (>= TASK_SIZE), then we are probably
* faulting in the vmalloc() area.
*
* If the init_task's first level page tables contain the relevant entry, we
* copy it to this task. If not, we send the process a signal, fixup the
* exception, or oops the kernel.
*
* NOTE! We MUST NOT take any locks for this case. We may be in an interrupt
* or a critical region, and should only copy the information from the master
* page table, nothing more.
*/
static int do_translation_fault(unsigned long addr,
unsigned int esr,
struct pt_regs *regs)
{
if (addr < USER_END)
return do_page_fault(addr, esr, regs);
do_bad_area(addr, esr, regs);
return 0;
}
static int do_alignment_fault(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
do_bad_area(addr, esr, regs);
return 0;
}
extern int breakpoint_handler(unsigned long unused, unsigned int esr, struct pt_regs *regs);
extern int single_step_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs);
extern int watchpoint_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs);
extern int brk_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs);
static struct fault_info debug_fault_info[] = {
{ breakpoint_handler, SIGTRAP, TRAP_HWBKPT, "hw-breakpoint handler" },
{ single_step_handler, SIGTRAP, TRAP_HWBKPT, "single-step handler" },
{ watchpoint_handler, SIGTRAP, TRAP_HWBKPT, "hw-watchpoint handler" },
{ do_bad, SIGBUS, 0, "unknown 3" },
{ do_bad, SIGTRAP, TRAP_BRKPT, "aarch32 BKPT" },
{ do_bad, SIGTRAP, 0, "aarch32 vector catch" },
{ brk_handler, SIGTRAP, TRAP_BRKPT, "ptrace BRK handler" },
{ do_bad, SIGBUS, 0, "unknown 7" },
};
int do_debug_exception(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
const struct fault_info *inf = debug_fault_info + DBG_ESR_EVT(esr);
struct siginfo info;
int from_user = interrupt_from_user(regs);
int ret = -1;
set_cputime(from_user ? 1: 2);
if (!inf->fn(addr, esr, regs)) {
ret = 1;
goto out;
}
kprintf("Unhandled debug exception: %s (0x%08x) at 0x%016lx\n",
inf->name, esr, addr);
info.si_signo = inf->sig;
info.si_errno = 0;
info.si_code = inf->code;
info._sifields._sigfault.si_addr = (void *)addr;
arm64_notify_die("", regs, &info, 0);
ret = 0;
out:
set_cputime(0);
return ret;
}
/*
* This abort handler always returns "fault".
*/
static int do_bad(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
set_cputime(interrupt_from_user(regs) ? 1: 2);
set_cputime(0);
return 1;
}
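
A hedged worked example of the dispatch above: the low six bits of ESR (the DFSC/IFSC fault status code) index fault_info[]. A level-3 permission fault encodes as 0b001111 == 15, so do_mem_abort() reaches do_page_fault() and, on failure, raises SIGSEGV with SEGV_ACCERR. sketch_classify() is illustrative only.

static const struct fault_info *sketch_classify(unsigned int esr)
{
	return fault_info + (esr & 63);	/* DFSC/IFSC: ESR bits [5:0] */
}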

arch/arm64/kernel/fpsimd.c (new file, 325 lines)

@@ -0,0 +1,325 @@
/* fpsimd.c COPYRIGHT FUJITSU LIMITED 2016-2017 */
#include <thread_info.h>
#include <fpsimd.h>
#include <cpuinfo.h>
#include <lwk/compiler.h>
#include <ikc/ihk.h>
#include <hwcap.h>
#include <cls.h>
#include <prctl.h>
#include <cpufeature.h>
#include <kmalloc.h>
//#define DEBUG_PRINT_FPSIMD
#ifdef DEBUG_PRINT_FPSIMD
#define dkprintf kprintf
#define ekprintf kprintf
#else
#define dkprintf(...) do { if (0) kprintf(__VA_ARGS__); } while (0)
#define ekprintf kprintf
#endif
#define BUG_ON(condition) do { if (condition) { kprintf("PANIC: %s: %s(line:%d)\n",\
__FILE__, __FUNCTION__, __LINE__); panic(""); } } while(0)
#ifdef CONFIG_ARM64_SVE
/* Maximum supported vector length across all CPUs (initially poisoned) */
int sve_max_vl = -1;
/* Default VL for tasks that don't set it explicitly: */
int sve_default_vl = -1;
size_t sve_state_size(struct thread const *thread)
{
unsigned int vl = thread->ctx.thread->sve_vl;
BUG_ON(!sve_vl_valid(vl));
return SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl));
}
void sve_free(struct thread *thread)
{
if (thread->ctx.thread->sve_state) {
kfree(thread->ctx.thread->sve_state);
thread->ctx.thread->sve_state = NULL;
}
}
void sve_alloc(struct thread *thread)
{
if (thread->ctx.thread->sve_state) {
return;
}
thread->ctx.thread->sve_state =
kmalloc(sve_state_size(thread), IHK_MC_AP_NOWAIT);
BUG_ON(!thread->ctx.thread->sve_state);
memset(thread->ctx.thread->sve_state, 0, sve_state_size(thread));
}
static int get_nr_threads(struct process *proc)
{
struct thread *child;
struct mcs_rwlock_node_irqsave lock;
int nr_threads = 0;
mcs_rwlock_reader_lock(&proc->threads_lock, &lock);
list_for_each_entry(child, &proc->threads_list, siblings_list){
nr_threads++;
}
mcs_rwlock_reader_unlock(&proc->threads_lock, &lock);
return nr_threads;
}
extern void save_fp_regs(struct thread *thread);
extern void clear_fp_regs(struct thread *thread);
extern void restore_fp_regs(struct thread *thread);
/* @ref.impl arch/arm64/kernel/fpsimd.c::sve_set_vector_length */
int sve_set_vector_length(struct thread *thread,
unsigned long vl, unsigned long flags)
{
struct thread_info *ti = thread->ctx.thread;
BUG_ON(thread == cpu_local_var(current) && cpu_local_var(no_preempt) == 0);
/*
* To avoid accidents, forbid setting for individual threads of a
* multithreaded process. User code that knows what it's doing can
* pass PR_SVE_SET_VL_THREAD to override this restriction:
*/
if (!(flags & PR_SVE_SET_VL_THREAD) && get_nr_threads(thread->proc) != 1) {
return -EINVAL;
}
flags &= ~(unsigned long)PR_SVE_SET_VL_THREAD;
if (flags & ~(unsigned long)(PR_SVE_SET_VL_INHERIT |
PR_SVE_SET_VL_ONEXEC)) {
return -EINVAL;
}
if (!sve_vl_valid(vl)) {
return -EINVAL;
}
if (vl > sve_max_vl) {
BUG_ON(!sve_vl_valid(sve_max_vl));
vl = sve_max_vl;
}
if (flags & (PR_SVE_SET_VL_ONEXEC |
PR_SVE_SET_VL_INHERIT)) {
ti->sve_vl_onexec = vl;
} else {
/* Reset VL to system default on next exec: */
ti->sve_vl_onexec = 0;
}
/* Only actually set the VL if not deferred: */
if (flags & PR_SVE_SET_VL_ONEXEC) {
goto out;
}
if (vl != ti->sve_vl) {
if ((elf_hwcap & HWCAP_SVE)) {
fp_regs_struct fp_regs;
memset(&fp_regs, 0, sizeof(fp_regs));
/* for self at prctl syscall */
if (thread == cpu_local_var(current)) {
save_fp_regs(thread);
clear_fp_regs(thread);
thread_sve_to_fpsimd(thread, &fp_regs);
sve_free(thread);
ti->sve_vl = vl;
sve_alloc(thread);
thread_fpsimd_to_sve(thread, &fp_regs);
restore_fp_regs(thread);
/* for target thread at ptrace */
} else {
thread_sve_to_fpsimd(thread, &fp_regs);
sve_free(thread);
ti->sve_vl = vl;
sve_alloc(thread);
thread_fpsimd_to_sve(thread, &fp_regs);
}
}
}
ti->sve_vl = vl;
out:
ti->sve_flags = flags & PR_SVE_SET_VL_INHERIT;
return 0;
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::sve_prctl_status */
/*
* Encode the current vector length and flags for return.
* This is only required for prctl(): ptrace has separate fields
*/
static int sve_prctl_status(const struct thread_info *ti)
{
int ret = ti->sve_vl;
ret |= ti->sve_flags << 16;
return ret;
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::sve_set_task_vl */
int sve_set_thread_vl(struct thread *thread, const unsigned long vector_length,
const unsigned long flags)
{
int ret;
if (!(elf_hwcap & HWCAP_SVE)) {
return -EINVAL;
}
BUG_ON(thread != cpu_local_var(current));
preempt_disable();
ret = sve_set_vector_length(thread, vector_length, flags);
preempt_enable();
if (ret) {
return ret;
}
return sve_prctl_status(thread->ctx.thread);
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::sve_get_ti_vl */
int sve_get_thread_vl(const struct thread *thread)
{
if (!(elf_hwcap & HWCAP_SVE)) {
return -EINVAL;
}
return sve_prctl_status(thread->ctx.thread);
}
void do_sve_acc(unsigned int esr, struct pt_regs *regs)
{
kprintf("PANIC: CPU: %d PID: %d ESR: %x Trapped SVE access.\n",
ihk_mc_get_processor_id(), cpu_local_var(current)->proc->pid, esr);
panic("");
}
void init_sve_vl(void)
{
extern unsigned long ihk_param_default_vl;
uint64_t zcr;
if (unlikely(!(elf_hwcap & HWCAP_SVE))) {
return;
}
zcr = read_system_reg(SYS_ZCR_EL1);
BUG_ON(((zcr & ZCR_EL1_LEN_MASK) + 1) * 16 > sve_max_vl);
sve_max_vl = ((zcr & ZCR_EL1_LEN_MASK) + 1) * 16;
sve_default_vl = ihk_param_default_vl;
if (sve_default_vl == 0) {
kprintf("SVE: Getting default VL = 0 from HOST-Linux.\n");
sve_default_vl = sve_max_vl > 64 ? 64 : sve_max_vl;
kprintf("SVE: Using default vl(%d byte).\n", sve_default_vl);
}
kprintf("SVE: maximum available vector length %u bytes per vector\n",
sve_max_vl);
kprintf("SVE: default vector length %u bytes per vector\n",
sve_default_vl);
}
#else /* CONFIG_ARM64_SVE */
void init_sve_vl(void)
{
/* nothing to do. */
}
#endif /* CONFIG_ARM64_SVE */
/* @ref.impl arch/arm64/kernel/fpsimd.c::__task_pffr */
static void *__thread_pffr(struct thread *thread)
{
unsigned int vl = thread->ctx.thread->sve_vl;
BUG_ON(!sve_vl_valid(vl));
return (char *)thread->ctx.thread->sve_state + 34 * vl;
}
/* The caller must check the HWCAP_FP and HWCAP_ASIMD state before calling this. */
void thread_fpsimd_load(struct thread *thread)
{
if (likely(elf_hwcap & HWCAP_SVE)) {
unsigned int vl = thread->ctx.thread->sve_vl;
BUG_ON(!sve_vl_valid(vl));
sve_load_state(__thread_pffr(thread), &thread->fp_regs->fpsr, sve_vq_from_vl(vl) - 1);
dkprintf("sve for TID %d restored\n", thread->tid);
} else {
// Load the current FPSIMD state from memory.
fpsimd_load_state(thread->fp_regs);
dkprintf("fp_regs for TID %d restored\n", thread->tid);
}
}
/* The caller must check the HWCAP_FP and HWCAP_ASIMD state before calling this. */
void thread_fpsimd_save(struct thread *thread)
{
if (likely(elf_hwcap & HWCAP_SVE)) {
sve_save_state(__thread_pffr(thread), &thread->fp_regs->fpsr);
dkprintf("sve for TID %d saved\n", thread->tid);
} else {
// Save the current FPSIMD state to memory.
fpsimd_save_state(thread->fp_regs);
dkprintf("fp_regs for TID %d saved\n", thread->tid);
}
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::__task_fpsimd_to_sve */
static void __thread_fpsimd_to_sve(struct thread *thread, fp_regs_struct *fp_regs, unsigned int vq)
{
struct fpsimd_sve_state(vq) *sst = thread->ctx.thread->sve_state;
unsigned int i;
for (i = 0; i < 32; i++) {
sst->zregs[i][0] = fp_regs->vregs[i];
}
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::task_fpsimd_to_sve */
void thread_fpsimd_to_sve(struct thread *thread, fp_regs_struct *fp_regs)
{
unsigned int vl = thread->ctx.thread->sve_vl;
BUG_ON(!sve_vl_valid(vl));
__thread_fpsimd_to_sve(thread, fp_regs, sve_vq_from_vl(vl));
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::__task_sve_to_fpsimd */
static void __thread_sve_to_fpsimd(struct thread *thread, fp_regs_struct *fp_regs, unsigned int vq)
{
struct fpsimd_sve_state(vq) *sst = thread->ctx.thread->sve_state;
unsigned int i;
for (i = 0; i < 32; i++) {
fp_regs->vregs[i] = sst->zregs[i][0];
}
}
/* @ref.impl arch/arm64/kernel/fpsimd.c::task_sve_to_fpsimd */
void thread_sve_to_fpsimd(struct thread *thread, fp_regs_struct *fp_regs)
{
unsigned int vl = thread->ctx.thread->sve_vl;
BUG_ON(!sve_vl_valid(vl));
__thread_sve_to_fpsimd(thread, fp_regs, sve_vq_from_vl(vl));
}
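
Since sve_prctl_status() above packs two values into one int, a hedged decode for prctl() callers (helper names illustrative): the vector length occupies the low 16 bits and the PR_SVE_* flags sit above bit 16.

static inline int sve_status_vl(int status)    { return status & 0xffff; }
static inline int sve_status_flags(int status) { return status >> 16; }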

arch/arm64/kernel/gencore.c (new file, 472 lines)

@@ -0,0 +1,472 @@
/* gencore.c COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef POSTK_DEBUG_ARCH_DEP_18 /* coredump arch separation. */
#include <ihk/debug.h>
#include <kmalloc.h>
#include <cls.h>
#include <list.h>
#include <process.h>
#include <string.h>
#include <elfcore.h>
#define align32(x) ((((x) + 3) / 4) * 4)
#define alignpage(x) ((((x) + (PAGE_SIZE) - 1) / (PAGE_SIZE)) * (PAGE_SIZE))
//#define DEBUG_PRINT_GENCORE
#ifdef DEBUG_PRINT_GENCORE
#define dkprintf(...) kprintf(__VA_ARGS__)
#define ekprintf(...) kprintf(__VA_ARGS__)
#else
#define dkprintf(...) do { if (0) kprintf(__VA_ARGS__); } while (0)
#define ekprintf(...) kprintf(__VA_ARGS__)
#endif
/*
* Generate a core file image, which consists of many chunks.
* Returns an allocated table, an entry of which is a pair of the address
* of a chunk and its length.
*/
/**
* \brief Fill the elf header.
*
* \param eh An Elf64_Ehdr structure.
* \param segs Number of segments of the core file.
*/
void fill_elf_header(Elf64_Ehdr *eh, int segs)
{
eh->e_ident[EI_MAG0] = 0x7f;
eh->e_ident[EI_MAG1] = 'E';
eh->e_ident[EI_MAG2] = 'L';
eh->e_ident[EI_MAG3] = 'F';
eh->e_ident[EI_CLASS] = ELFCLASS64;
eh->e_ident[EI_DATA] = ELFDATA2LSB;
eh->e_ident[EI_VERSION] = El_VERSION;
eh->e_ident[EI_OSABI] = ELFOSABI_NONE;
eh->e_ident[EI_ABIVERSION] = El_ABIVERSION_NONE;
eh->e_type = ET_CORE;
#ifdef CONFIG_MIC
eh->e_machine = EM_K10M;
#else
eh->e_machine = EM_X86_64;
#endif
eh->e_version = EV_CURRENT;
eh->e_entry = 0; /* Do we really need this? */
eh->e_phoff = 64; /* fixed */
eh->e_shoff = 0; /* no section header */
eh->e_flags = 0;
eh->e_ehsize = 64; /* fixed */
eh->e_phentsize = 56; /* fixed */
eh->e_phnum = segs;
eh->e_shentsize = 0;
eh->e_shnum = 0;
eh->e_shstrndx = 0;
}
/**
* \brief Return the size of the prstatus entry of the NOTE segment.
*
*/
int get_prstatus_size(void)
{
return sizeof(struct note) + align32(sizeof("CORE"))
+ align32(sizeof(struct elf_prstatus64));
}
/**
* \brief Fill a prstatus structure.
*
* \param head A pointer to a note structure.
* \param thread A pointer to the current thread structure.
* \param regs0 A pointer to a x86_regs structure.
*/
void fill_prstatus(struct note *head, struct thread *thread, void *regs0)
{
/* TODO(pka_idle) */
}
/**
* \brief Return the size of the prpsinfo entry of the NOTE segment.
*
*/
int get_prpsinfo_size(void)
{
return sizeof(struct note) + align32(sizeof("CORE"))
+ align32(sizeof(struct elf_prpsinfo64));
}
/**
* \brief Fill a prpsinfo structure.
*
* \param head A pointer to a note structure.
* \param thread A pointer to the current thread structure.
* \param regs A pointer to a x86_regs structure.
*/
void fill_prpsinfo(struct note *head, struct thread *thread, void *regs)
{
void *name;
struct elf_prpsinfo64 *prpsinfo;
head->namesz = sizeof("CORE");
head->descsz = sizeof(struct elf_prpsinfo64);
head->type = NT_PRPSINFO;
name = (void *) (head + 1);
memcpy(name, "CORE", sizeof("CORE"));
prpsinfo = (struct elf_prpsinfo64 *)(name + align32(sizeof("CORE")));
prpsinfo->pr_state = thread->status;
prpsinfo->pr_pid = thread->proc->pid;
/*
We leave most of the fields unfilled.
char pr_sname;
char pr_zomb;
char pr_nice;
a8_uint64_t pr_flag;
unsigned int pr_uid;
unsigned int pr_gid;
int pr_ppid, pr_pgrp, pr_sid;
char pr_fname[16];
char pr_psargs[ELF_PRARGSZ];
*/
}
/**
* \brief Return the size of the AUXV entry of the NOTE segment.
*
*/
int get_auxv_size(void)
{
return sizeof(struct note) + align32(sizeof("CORE"))
+ sizeof(unsigned long) * AUXV_LEN;
}
/**
* \brief Fill an AUXV structure.
*
* \param head A pointer to a note structure.
* \param thread A pointer to the current thread structure.
* \param regs A pointer to a x86_regs structure.
*/
void fill_auxv(struct note *head, struct thread *thread, void *regs)
{
void *name;
void *auxv;
head->namesz = sizeof("CORE");
head->descsz = sizeof(unsigned long) * AUXV_LEN;
head->type = NT_AUXV;
name = (void *) (head + 1);
memcpy(name, "CORE", sizeof("CORE"));
auxv = name + align32(sizeof("CORE"));
memcpy(auxv, thread->proc->saved_auxv, sizeof(unsigned long) * AUXV_LEN);
}
/**
* \brief Return the size of the whole NOTE segment.
*
*/
int get_note_size(void)
{
return get_prstatus_size() + get_prpsinfo_size()
+ get_auxv_size();
}
/**
* \brief Fill the NOTE segment.
*
* \param head A pointer to a note structure.
* \param thread A pointer to the current thread structure.
* \param regs A pointer to a x86_regs structure.
*/
void fill_note(void *note, struct thread *thread, void *regs)
{
fill_prstatus(note, thread, regs);
note += get_prstatus_size();
fill_prpsinfo(note, thread, regs);
note += get_prpsinfo_size();
fill_auxv(note, thread, regs);
}
/**
* \brief Generate an image of the core file.
*
* \param thread A pointer to the current thread structure.
* \param regs A pointer to a x86_regs structure.
* \param coretable(out) An array of core chunks.
* \param chunks(out) Number of the entries of coretable.
*
* A core chunk is represented by a pair of a physical
* address of memory region and its size. If there are
* no corresponding physical address for a VM area
* (an unallocated demand-paging page, e.g.), the address
* should be zero.
*/
int gencore(struct thread *thread, void *regs,
struct coretable **coretable, int *chunks)
{
struct coretable *ct = NULL;
Elf64_Ehdr eh;
Elf64_Phdr *ph = NULL;
void *note = NULL;
struct vm_range *range, *next;
struct process_vm *vm = thread->vm;
int segs = 1; /* the first one is for NOTE */
int notesize, phsize, alignednotesize;
unsigned int offset = 0;
int i;
*chunks = 3; /* Elf header , header table and NOTE segment */
if (vm == NULL) {
dkprintf("no vm found.\n");
return -1;
}
next = lookup_process_memory_range(vm, 0, -1);
while ((range = next)) {
next = next_process_memory_range(vm, range);
dkprintf("start:%lx end:%lx flag:%lx objoff:%lx\n",
range->start, range->end, range->flag, range->objoff);
/* We omit reserved areas because they are only for
mckernel's internal use. */
if (range->flag & VR_RESERVED)
continue;
/* We need a chunk for each page for a demand paging area.
This can be optimized for space complexity but we would
lose simplicity instead. */
if (range->flag & VR_DEMAND_PAGING) {
unsigned long p, phys;
int prevzero = 0;
for (p = range->start; p < range->end; p += PAGE_SIZE) {
if (ihk_mc_pt_virt_to_phys(thread->vm->address_space->page_table,
(void *)p, &phys) != 0) {
prevzero = 1;
} else {
if (prevzero == 1)
(*chunks)++;
(*chunks)++;
prevzero = 0;
}
}
if (prevzero == 1)
(*chunks)++;
} else {
(*chunks)++;
}
segs++;
}
dkprintf("we have %d segs and %d chunks.\n\n", segs, *chunks);
{
struct vm_regions region = thread->vm->region;
dkprintf("text: %lx-%lx\n", region.text_start, region.text_end);
dkprintf("data: %lx-%lx\n", region.data_start, region.data_end);
dkprintf("brk: %lx-%lx\n", region.brk_start, region.brk_end);
dkprintf("map: %lx-%lx\n", region.map_start, region.map_end);
dkprintf("stack: %lx-%lx\n", region.stack_start, region.stack_end);
dkprintf("user: %lx-%lx\n\n", region.user_start, region.user_end);
}
dkprintf("now generate a core file image\n");
offset += sizeof(eh);
fill_elf_header(&eh, segs);
/* program header table */
phsize = sizeof(Elf64_Phdr) * segs;
ph = kmalloc(phsize, IHK_MC_AP_NOWAIT);
if (ph == NULL) {
dkprintf("could not alloc a program header table.\n");
goto fail;
}
memset(ph, 0, phsize);
offset += phsize;
/* NOTE segment
* To align the next segment page-sized, we prepare a padded
* region for our NOTE segment.
*/
notesize = get_note_size();
alignednotesize = alignpage(notesize + offset) - offset;
note = kmalloc(alignednotesize, IHK_MC_AP_NOWAIT);
if (note == NULL) {
dkprintf("could not alloc NOTE for core.\n");
goto fail;
}
memset(note, 0, alignednotesize);
fill_note(note, thread, regs);
/* program header for the NOTE segment is exceptional */
ph[0].p_type = PT_NOTE;
ph[0].p_flags = 0;
ph[0].p_offset = offset;
ph[0].p_vaddr = 0;
ph[0].p_paddr = 0;
ph[0].p_filesz = notesize;
ph[0].p_memsz = notesize;
ph[0].p_align = 0;
offset += alignednotesize;
/* program header for each memory chunk */
i = 1;
next = lookup_process_memory_range(vm, 0, -1);
while ((range = next)) {
next = next_process_memory_range(vm, range);
unsigned long flag = range->flag;
unsigned long size = range->end - range->start;
if (range->flag & VR_RESERVED)
continue;
ph[i].p_type = PT_LOAD;
ph[i].p_flags = ((flag & VR_PROT_READ) ? PF_R : 0)
| ((flag & VR_PROT_WRITE) ? PF_W : 0)
| ((flag & VR_PROT_EXEC) ? PF_X : 0);
ph[i].p_offset = offset;
ph[i].p_vaddr = range->start;
ph[i].p_paddr = 0;
ph[i].p_filesz = size;
ph[i].p_memsz = size;
ph[i].p_align = PAGE_SIZE;
i++;
offset += size;
}
/* coretable to send to host */
ct = kmalloc(sizeof(struct coretable) * (*chunks), IHK_MC_AP_NOWAIT);
if (!ct) {
dkprintf("could not alloc a coretable.\n");
goto fail;
}
ct[0].addr = virt_to_phys(&eh); /* ELF header */
ct[0].len = 64;
dkprintf("coretable[0]: %lx@%lx(%lx)\n", ct[0].len, ct[0].addr, &eh);
ct[1].addr = virt_to_phys(ph); /* program header table */
ct[1].len = phsize;
dkprintf("coretable[1]: %lx@%lx(%lx)\n", ct[1].len, ct[1].addr, ph);
ct[2].addr = virt_to_phys(note); /* NOTE segment */
ct[2].len = alignednotesize;
dkprintf("coretable[2]: %lx@%lx(%lx)\n", ct[2].len, ct[2].addr, note);
i = 3; /* memory segments */
next = lookup_process_memory_range(vm, 0, -1);
while ((range = next)) {
next = next_process_memory_range(vm, range);
unsigned long phys;
if (range->flag & VR_RESERVED)
continue;
if (range->flag & VR_DEMAND_PAGING) {
/* Just an ad hoc kluge. */
unsigned long p, start, phys;
int prevzero = 0;
unsigned long size = 0;
for (start = p = range->start;
p < range->end; p += PAGE_SIZE) {
if (ihk_mc_pt_virt_to_phys(thread->vm->address_space->page_table,
(void *)p, &phys) != 0) {
if (prevzero == 0) {
/* We begin a new chunk */
size = PAGE_SIZE;
start = p;
} else {
/* We extend the previous chunk */
size += PAGE_SIZE;
}
prevzero = 1;
} else {
if (prevzero == 1) {
/* Flush out an empty chunk */
ct[i].addr = 0;
ct[i].len = size;
dkprintf("coretable[%d]: %lx@%lx(%lx)\n", i,
ct[i].len, ct[i].addr, start);
i++;
}
ct[i].addr = phys;
ct[i].len = PAGE_SIZE;
dkprintf("coretable[%d]: %lx@%lx(%lx)\n", i,
ct[i].len, ct[i].addr, p);
i++;
prevzero = 0;
}
}
if (prevzero == 1) {
/* An empty chunk */
ct[i].addr = 0;
ct[i].len = size;
dkprintf("coretable[%d]: %lx@%lx(%lx)\n", i,
ct[i].len, ct[i].addr, start);
i++;
}
} else {
if ((thread->vm->region.user_start <= range->start) &&
(range->end <= thread->vm->region.user_end)) {
if (ihk_mc_pt_virt_to_phys(thread->vm->address_space->page_table,
(void *)range->start, &phys) != 0) {
dkprintf("could not convert user virtual address %lx"
"to physical address", range->start);
goto fail;
}
} else {
phys = virt_to_phys((void *)range->start);
}
ct[i].addr = phys;
ct[i].len = range->end - range->start;
dkprintf("coretable[%d]: %lx@%lx(%lx)\n", i,
ct[i].len, ct[i].addr, range->start);
i++;
}
}
*coretable = ct;
return 0;
fail:
if (ct)
kfree(ct);
if (ph)
kfree(ph);
if (note)
kfree(note);
return -1;
}
/**
* \brief Free all the allocated spaces for an image of the core file.
*
* \param coretable An array of core chunks.
*/
void freecore(struct coretable **coretable)
{
struct coretable *ct = *coretable;
kfree(phys_to_virt(ct[2].addr)); /* NOTE segment */
kfree(phys_to_virt(ct[1].addr)); /* ph */
kfree(*coretable);
}
#endif /* !POSTK_DEBUG_ARCH_DEP_18 */
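
As a hedged cross-check of the offsets gencore() computes: the image begins with the fixed 64-byte ELF header, followed by segs 56-byte program header entries, followed by the NOTE segment padded to a page boundary so that the first PT_LOAD chunk is page aligned. sketch_first_load_offset() is illustrative and reuses the file's alignpage()/get_note_size() helpers.

static unsigned long sketch_first_load_offset(int segs)
{
	unsigned long off = 64			/* Elf64_Ehdr; eh.e_phoff == 64 */
			  + 56UL * segs;	/* program header table */
	return alignpage(off + get_note_size());	/* padded NOTE segment */
}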

arch/arm64/kernel/head.S (new file, 805 lines)

@@ -0,0 +1,805 @@
/* head.S COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <linkage.h>
#include <ptrace.h>
#include <assembler.h>
#include <asm-offsets.h>
#include <virt.h>
#include <cache.h>
#include <arch-memory.h>
#include <smp.h>
#include <arm-gic-v3.h>
#define KERNEL_RAM_VADDR MAP_KERNEL_START
#define EARLY_ALLOC_VADDR MAP_EARLY_ALLOC
#define BOOT_PARAM_VADDR MAP_BOOT_PARAM
//#ifndef CONFIG_SMP
//# define PTE_FLAGS PTE_TYPE_PAGE | PTE_AF
//# define PMD_FLAGS PMD_TYPE_SECT | PMD_SECT_AF
//#else
# define PTE_FLAGS PTE_TYPE_PAGE | PTE_AF | PTE_SHARED
# define PMD_FLAGS PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S
//#endif /*CONFIG_SMP*/
#ifdef CONFIG_ARM64_64K_PAGES
# define MM_MMUFLAGS PTE_ATTRINDX(MT_NORMAL) | PTE_FLAGS
#else
# define MM_MMUFLAGS PMD_ATTRINDX(MT_NORMAL) | PMD_FLAGS
#endif
.macro pgtbl_init_core, name, dir, tbl, ents, virt_to_phys
ldr \tbl, =\name
ldr \ents, =\dir
add \tbl, \tbl, \virt_to_phys
str \ents, [\tbl]
add \tbl, \tbl, #8
add \ents, \ents, \virt_to_phys
str \ents, [\tbl]
.endm
.macro pgtbl_init, tbl, ents, virt_to_phys
pgtbl_init_core swapper_page_table, swapper_pg_dir, \tbl, \ents, \virt_to_phys
pgtbl_init_core idmap_page_table, idmap_pg_dir, \tbl, \ents, \virt_to_phys
.endm
.macro pgtbl, ttb0, ttb1, virt_to_phys
ldr \ttb1, =swapper_pg_dir
ldr \ttb0, =idmap_pg_dir
add \ttb1, \ttb1, \virt_to_phys
add \ttb0, \ttb0, \virt_to_phys
.endm
#ifdef CONFIG_ARM64_64K_PAGES
# define BLOCK_SHIFT PAGE_SHIFT
# define BLOCK_SIZE PAGE_SIZE
# define TABLE_SHIFT PMD_SHIFT
#else
# define BLOCK_SHIFT SECTION_SHIFT
# define BLOCK_SIZE SECTION_SIZE
# define TABLE_SHIFT PUD_SHIFT
#endif
#define KERNEL_START KERNEL_RAM_VADDR
#define KERNEL_END _end
/* ihk param offset */
#define TRAMPOLINE_DATA_RESERVED_SIZE 0x08
#define TRAMPOLINE_DATA_PGTBL_SIZE 0x08
#define TRAMPOLINE_DATA_LOAD_SIZE 0x08
#define TRAMPOLINE_DATA_STACK_SIZE 0x08
#define TRAMPOLINE_DATA_BOOT_PARAM_SIZE 0x08
#define TRAMPOLINE_DATA_STARTUP_DATA_SIZE 0x08
#define TRAMPOLINE_DATA_ST_PHYS_BASE_SIZE 0x08
#define TRAMPOLINE_DATA_ST_PHYS_SIZE_SIZE 0x08
#define TRAMPOLINE_DATA_GIC_DIST_PA_SIZE 0x08
#define TRAMPOLINE_DATA_GIC_DIST_MAP_SIZE_SIZE 0x08
#define TRAMPOLINE_DATA_GIC_CPU_PA_SIZE 0x08
#define TRAMPOLINE_DATA_GIC_CPU_MAP_SIZE_SIZE 0x08
#define TRAMPOLINE_DATA_GIC_PERCPU_OFF_SIZE 0x04
#define TRAMPOLINE_DATA_GIC_VERSION_SIZE 0x04
#define TRAMPOLINE_DATA_LPJ_SIZE 0x08
#define TRAMPOLINE_DATA_HZ_SIZE 0x08
#define TRAMPOLINE_DATA_PSCI_METHOD_SIZE 0x08
#define TRAMPOLINE_DATA_USE_VIRT_TIMER_SIZE 0x08
#define TRAMPOLINE_DATA_EVTSTRM_TIMER_RATE_SIZE 0x08
#define TRAMPOLINE_DATA_DEFAULT_VL_SIZE 0x08
#define TRAMPOLINE_DATA_CPU_MAP_SIZE_SIZE 0x08
#define TRAMPOLINE_DATA_CPU_MAP_SIZE (NR_CPUS * 8)
#define TRAMPOLINE_DATA_DATA_RDISTS_PA_SIZE (NR_CPUS * 8)
#define TRAMPOLINE_DATA_NR_PMU_AFFI_SIZE 0x04
#define TRAMPOLINE_DATA_PMU_AFF_SIZE (CONFIG_SMP_MAX_CORES * 4)
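/*
 * A hedged C view of the record described by the TRAMPOLINE_DATA_* sizes
 * above; arch_start below walks it with post-indexed loads in exactly
 * this order. The struct and field names are illustrative only:
 *
 *	struct trampoline_data_sketch {
 *		uint64_t reserved, header_pgtbl, header_load, stack_ptr;
 *		uint64_t boot_param;		// -> ihk_param_param_addr
 *		uint64_t startup_data;		// ARG2 -> ihk_param_phys_addr
 *		uint64_t st_phys_base, st_phys_size;
 *		uint64_t gic_dist_pa, gic_dist_map_size;
 *		uint64_t gic_cpu_pa, gic_cpu_map_size;
 *		uint32_t gic_percpu_off, gic_version;
 *		uint64_t lpj, hz, psci_method;
 *		uint64_t use_virt_timer, evtstrm_timer_rate, default_vl;
 *		uint64_t cpu_logical_map_size;
 *		uint64_t cpu_logical_map[NR_CPUS];
 *		uint64_t gic_rdist_base_pa[NR_CPUS];
 *		uint32_t nr_pmu_irq_affinity;
 *		uint32_t pmu_irq_affinity[CONFIG_SMP_MAX_CORES];
 *	};
 */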
#define STARTUP_DATA_RESERVED 0x00
#define STARTUP_DATA_BASE 0x08
#define STARTUP_DATA_PGTBL 0x10
#define STARTUP_DATA_STACK 0x18
#define STARTUP_DATA_ARG2 0x20
#define STARTUP_DATA_TRAMPILINE 0x28
#define STARTUP_DATA_NEXT_PC 0x30
/* ihk param save area */
.globl ihk_param_head
.globl ihk_param_gic_dist_base_pa, ihk_param_gic_cpu_base_pa
.globl ihk_param_gic_dist_map_size, ihk_param_gic_cpu_map_size
.globl ihk_param_gic_percpu_offset, ihk_param_gic_version
.globl ihk_param_lpj, ihk_param_hz, ihk_param_psci_method
.globl ihk_param_cpu_logical_map, ihk_param_gic_rdist_base_pa
.globl ihk_param_pmu_irq_affiniry, ihk_param_nr_pmu_irq_affiniry
.globl ihk_param_use_virt_timer, ihk_param_evtstrm_timer_rate
.globl ihk_param_default_vl
ihk_param_head:
ihk_param_param_addr:
.quad 0
ihk_param_phys_addr:
.quad 0
ihk_param_st_phys_base:
.quad 0
ihk_param_st_phys_size:
.quad 0
ihk_param_gic_dist_base_pa:
.quad 0
ihk_param_gic_dist_map_size:
.quad 0
ihk_param_gic_cpu_base_pa:
.quad 0
ihk_param_gic_cpu_map_size:
.quad 0
ihk_param_gic_percpu_offset:
.word 0
ihk_param_gic_version:
.word 0
ihk_param_lpj:
.quad 0 /* udelay loops value */
ihk_param_hz:
.quad 0 /* host HZ value */
ihk_param_psci_method:
.quad 0 /* hvc or smc ? */
ihk_param_use_virt_timer:
.quad 0 /* virt timer or phys timer ? */
ihk_param_evtstrm_timer_rate:
.quad 0 /* event stream timer rate */
ihk_param_default_vl:
.quad 0 /* SVE default VL */
ihk_param_cpu_logical_map:
.skip NR_CPUS * 8 /* array of the MPIDR and the core number */
ihk_param_gic_rdist_base_pa:
.skip NR_CPUS * 8 /* per-cpu re-distributer PA */
ihk_param_pmu_irq_affiniry:
.skip CONFIG_SMP_MAX_CORES * 4 /* array of the pmu affinity list */
ihk_param_nr_pmu_irq_affiniry:
.word 0 /* number of pmu affinity list elements. */
/* @ref.impl arch/arm64/include/asm/kvm_arm.h */
#define HCR_E2H (UL(1) << 34)
#define HCR_RW_SHIFT 31
#define HCR_RW (UL(1) << HCR_RW_SHIFT)
#define HCR_TGE (UL(1) << 27)
/*
* end early head section, begin head code that is also used for
* hotplug and needs to have the same protections as the text region
*/
.section ".text","ax"
ENTRY(arch_start)
/* store ihk param */
/* x4 = ihk_smp_trampoline_data PA */
add x0, x4, #TRAMPOLINE_DATA_RESERVED_SIZE
/* header_pgtbl */
add x0, x0, #TRAMPOLINE_DATA_PGTBL_SIZE
/* header_load */
add x0, x0, #TRAMPOLINE_DATA_LOAD_SIZE
/* stack_ptr */
add x0, x0, #TRAMPOLINE_DATA_STACK_SIZE
/* notify_address */
ldr x16, [x0], #TRAMPOLINE_DATA_BOOT_PARAM_SIZE
adr x15, ihk_param_param_addr
str x16, [x15]
/* startup_data */
ldr x16, [x0], #TRAMPOLINE_DATA_STARTUP_DATA_SIZE
ldr x15, [x16, #STARTUP_DATA_ARG2]
adr x17, ihk_param_phys_addr
str x15, [x17]
/* st_phys_base */
ldr x16, [x0], #TRAMPOLINE_DATA_ST_PHYS_BASE_SIZE
adr x15, ihk_param_st_phys_base
str x16, [x15]
/* st_phys_size */
ldr x16, [x0], #TRAMPOLINE_DATA_ST_PHYS_SIZE_SIZE
adr x15, ihk_param_st_phys_size
str x16, [x15]
/* dist_base_pa */
ldr x16, [x0], #TRAMPOLINE_DATA_GIC_DIST_PA_SIZE
adr x15, ihk_param_gic_dist_base_pa
str x16, [x15]
/* dist_map_size */
ldr x16, [x0], #TRAMPOLINE_DATA_GIC_DIST_MAP_SIZE_SIZE
adr x15, ihk_param_gic_dist_map_size
str x16, [x15]
/* cpu_base_pa */
ldr x16, [x0], #TRAMPOLINE_DATA_GIC_CPU_PA_SIZE
adr x15, ihk_param_gic_cpu_base_pa
str x16, [x15]
/* cpu_map_size */
ldr x16, [x0], #TRAMPOLINE_DATA_GIC_CPU_MAP_SIZE_SIZE
adr x15, ihk_param_gic_cpu_map_size
str x16, [x15]
/* percpu_offset */
ldr w16, [x0], #TRAMPOLINE_DATA_GIC_PERCPU_OFF_SIZE
adr x15, ihk_param_gic_percpu_offset
str w16, [x15]
/* gic_version */
ldr w16, [x0], #TRAMPOLINE_DATA_GIC_VERSION_SIZE
adr x15, ihk_param_gic_version
str w16, [x15]
/* loops_per_jiffy */
ldr x16, [x0], #TRAMPOLINE_DATA_LPJ_SIZE
adr x15, ihk_param_lpj
str x16, [x15]
/* hz */
ldr x16, [x0], #TRAMPOLINE_DATA_HZ_SIZE
adr x15, ihk_param_hz
str x16, [x15]
/* psci_method */
ldr x16, [x0], #TRAMPOLINE_DATA_PSCI_METHOD_SIZE
adr x15, ihk_param_psci_method
str x16, [x15]
/* use_virt_timer */
ldr x16, [x0], #TRAMPOLINE_DATA_USE_VIRT_TIMER_SIZE
adr x15, ihk_param_use_virt_timer
str x16, [x15]
/* evtstrm_timer_rate */
ldr x16, [x0], #TRAMPOLINE_DATA_EVTSTRM_TIMER_RATE_SIZE
adr x15, ihk_param_evtstrm_timer_rate
str x16, [x15]
/* SVE default VL */
ldr x16, [x0], #TRAMPOLINE_DATA_DEFAULT_VL_SIZE
adr x15, ihk_param_default_vl
str x16, [x15]
/* cpu_logical_map_size */
ldr x16, [x0], #TRAMPOLINE_DATA_CPU_MAP_SIZE_SIZE
mov x1, x16
/* cpu_logical_map */
adr x15, ihk_param_cpu_logical_map
mov x18, x0
1: ldr x17, [x18], #8
str x17, [x15], #8
sub x16, x16, #1
cmp x16, #0
b.ne 1b
mov x16, #NR_CPUS /* calc next data */
lsl x16, x16, 3
add x0, x0, x16
/* reset cpu_logical_map_size */
mov x16, x1
/* gic_rdist_base_pa */
adr x15, ihk_param_gic_rdist_base_pa
mov x18, x0
1: ldr x17, [x18], #8
str x17, [x15], #8
sub x16, x16, #1
cmp x16, #0
b.ne 1b
mov x16, #NR_CPUS /* calc next data */
lsl x16, x16, 3
add x0, x0, x16
/* nr_pmu_irq_affiniry */
ldr w16, [x0], #TRAMPOLINE_DATA_NR_PMU_AFFI_SIZE
adr x15, ihk_param_nr_pmu_irq_affiniry
str w16, [x15]
/* pmu_irq_affiniry */
mov x18, x0
adr x15, ihk_param_pmu_irq_affiniry
b 2f
1: ldr w17, [x18], #4
str w17, [x15], #4
sub w16, w16, #1
2: cmp w16, #0
b.ne 1b
mov x16, #CONFIG_SMP_MAX_CORES /* calc next data */
lsl x16, x16, 2
add x0, x0, x16
/* */
bl __calc_phys_offset // x24=PHYS_OFFSET, x28=PHYS_OFFSET-KERNEL_START
bl __create_page_tables // x25=TTBR0, x26=TTBR1
b secondary_entry_common
ENDPROC(arch_start)
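The sequence of post-indexed loads above walks a packed parameter block laid out by the host side. As a reading aid, here is a hypothetical C view of that block; the struct name and the exact field widths behind the TRAMPOLINE_DATA_*_SIZE constants are assumptions inferred from the loads above, not taken from the project's headers:
/* Hypothetical C sketch of the trampoline data parsed by arch_start.
 * Field names/widths are inferred from the loads above (assumption). */
struct ihk_smp_trampoline_data_sketch {
	uint8_t  reserved[TRAMPOLINE_DATA_RESERVED_SIZE];
	uint8_t  header_pgtbl[TRAMPOLINE_DATA_PGTBL_SIZE];
	uint8_t  header_load[TRAMPOLINE_DATA_LOAD_SIZE];
	uint8_t  stack_ptr[TRAMPOLINE_DATA_STACK_SIZE];
	uint64_t notify_address;	/* -> ihk_param_param_addr */
	uint64_t startup_data;		/* PA of startup data; its arg2
					 * field -> ihk_param_phys_addr */
	uint64_t st_phys_base;
	uint64_t st_phys_size;
	uint64_t gic_dist_base_pa;
	uint64_t gic_dist_map_size;
	uint64_t gic_cpu_base_pa;
	uint64_t gic_cpu_map_size;
	uint32_t gic_percpu_offset;	/* loaded as a 32-bit value */
	uint32_t gic_version;		/* loaded as a 32-bit value */
	uint64_t loops_per_jiffy;
	uint64_t hz;
	uint64_t psci_method;
	uint64_t use_virt_timer;
	uint64_t evtstrm_timer_rate;
	uint64_t default_vl;		/* SVE default vector length */
	uint64_t cpu_logical_map_size;
	uint64_t cpu_logical_map[NR_CPUS];	/* MPIDR per core */
	uint64_t gic_rdist_base_pa[NR_CPUS];	/* per-CPU redistributor PA */
	uint32_t nr_pmu_irq_affinity;
	uint32_t pmu_irq_affinity[CONFIG_SMP_MAX_CORES];
};
Note that the two per-CPU arrays are always NR_CPUS entries wide even when cpu_logical_map_size is smaller, which is why the code re-advances x0 by NR_CPUS * 8 after each copy loop.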
ENTRY(arch_ap_start)
bl __calc_phys_offset // x24=PHYS_OFFSET, x28=PHYS_OFFSET-KERNEL_START
b secondary_entry_common
ENDPROC(arch_ap_start)
/*
* Macro to create a table entry to the next page.
*
* tbl: page table address
* virt: virtual address
* shift: #imm page table shift
* ptrs: #imm pointers per table page
*
* Preserves: virt
* Corrupts: tmp1, tmp2
* Returns: tbl -> next level table page address
*/
.macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
lsr \tmp1, \virt, #\shift
and \tmp1, \tmp1, #\ptrs - 1 // table index
add \tmp2, \tbl, #PAGE_SIZE
orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type
str \tmp2, [\tbl, \tmp1, lsl #3]
add \tbl, \tbl, #PAGE_SIZE // next level table page
.endm
/*
* Macro to populate the PGD (and possibly PUD) for the corresponding
* block entry in the next level (tbl) for the given virtual address.
*
* Preserves: tbl, next, virt
* Corrupts: tmp1, tmp2
*/
.macro create_pgd_entry, tbl, virt, tmp1, tmp2
create_table_entry \tbl, \virt, PGDIR_SHIFT, PTRS_PER_PGD, \tmp1, \tmp2
#if SWAPPER_PGTABLE_LEVELS == 3
create_table_entry \tbl, \virt, TABLE_SHIFT, PTRS_PER_PTE, \tmp1, \tmp2
#endif
.endm
/*
* Macro to populate block entries in the page table for the start..end
* virtual range (inclusive).
*
* Preserves: tbl, flags
* Corrupts: phys, start, end, pstate
*/
.macro create_block_map, tbl, flags, phys, start, end
lsr \phys, \phys, #BLOCK_SHIFT
lsr \start, \start, #BLOCK_SHIFT
and \start, \start, #PTRS_PER_PTE - 1 // table index
orr \phys, \flags, \phys, lsl #BLOCK_SHIFT // table entry
lsr \end, \end, #BLOCK_SHIFT
and \end, \end, #PTRS_PER_PTE - 1 // table end index
9999: str \phys, [\tbl, \start, lsl #3] // store the entry
add \start, \start, #1 // next entry
add \phys, \phys, #BLOCK_SIZE // next block
cmp \start, \end
b.ls 9999b
.endm
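As a reading aid, this is roughly what create_block_map emits, sketched in C under the assumption of 64-bit descriptors and a power-of-two BLOCK_SIZE (an illustration, not project code):
/* Fills tbl[start..end] (block indices, inclusive) with contiguous
 * block descriptors starting at 'phys', OR-ing in 'flags'. */
static void create_block_map_c(uint64_t *tbl, uint64_t flags,
                               uint64_t phys, uint64_t start, uint64_t end)
{
	uint64_t idx   = (start >> BLOCK_SHIFT) & (PTRS_PER_PTE - 1);
	uint64_t last  = (end   >> BLOCK_SHIFT) & (PTRS_PER_PTE - 1);
	uint64_t entry = flags | (phys & ~(BLOCK_SIZE - 1));

	do {
		tbl[idx++] = entry;	/* store one block descriptor */
		entry += BLOCK_SIZE;	/* next physical block */
	} while (idx <= last);		/* inclusive range, like b.ls above */
}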
/*
* Set up the initial page tables. We only set up the bare minimum
* required to get the kernel running. The following sections are required:
* - identity mapping to enable the MMU (low address, TTBR0)
* - first few MB of the kernel linear mapping to jump to once the MMU has
* been enabled, including the FDT blob (TTBR1)
* - pgd entry for fixed mappings (TTBR1)
*/
__create_page_tables:
pgtbl_init x25, x26, x28
pgtbl x25, x26, x28 // idmap_pg_dir and swapper_pg_dir addresses
mov x27, lr
/*
* Invalidate the idmap and swapper page tables to avoid potential
* dirty cache lines being evicted.
*/
mov x0, x25
add x1, x26, #SWAPPER_DIR_SIZE
bl __inval_cache_range
/*
* Clear the idmap and swapper page tables.
*/
mov x0, x25
add x6, x26, #SWAPPER_DIR_SIZE
1: stp xzr, xzr, [x0], #16
stp xzr, xzr, [x0], #16
stp xzr, xzr, [x0], #16
stp xzr, xzr, [x0], #16
cmp x0, x6
b.lo 1b
ldr x7, =MM_MMUFLAGS
/*
* Create the identity mapping.
*/
mov x0, x25 // idmap_pg_dir
ldr x3, =KERNEL_START
add x3, x3, x28 // __pa(KERNEL_START)
create_pgd_entry x0, x3, x5, x6
ldr x6, =KERNEL_END
mov x5, x3 // __pa(KERNEL_START)
add x6, x6, x28 // __pa(KERNEL_END)
create_block_map x0, x7, x3, x5, x6
/*
* Map the kernel image (starting with PHYS_OFFSET).
*/
mov x0, x26 // swapper_pg_dir
ldr x5, =KERNEL_START
create_pgd_entry x0, x5, x3, x6
ldr x6, =KERNEL_END
mov x3, x24 // phys offset
create_block_map x0, x7, x3, x5, x6
/*
* Map the early_alloc_pages area, kernel_img next block
*/
ldr x3, =KERNEL_END
add x3, x3, x28 // __pa(KERNEL_END)
add x3, x3, #BLOCK_SIZE
sub x3, x3, #1
bic x3, x3, #(BLOCK_SIZE - 1) // start PA calc.
ldr x5, =EARLY_ALLOC_VADDR // get start VA
mov x6, #1
lsl x6, x6, #(PAGE_SHIFT + MAP_EARLY_ALLOC_SHIFT)
add x6, x5, x6 // end VA calc
sub x6, x6, #1 // inclusive range
create_block_map x0, x7, x3, x5, x6
/*
* Map the boot_param area
*/
adr x3, ihk_param_param_addr
ldr x3, [x3] // get boot_param PA
ldr x5, =BOOT_PARAM_VADDR // get boot_param VA
mov x6, #1
lsl x6, x6, #MAP_BOOT_PARAM_SHIFT
add x6, x5, x6 // end VA calc
sub x6, x6, #1 // inclusive range
create_block_map x0, x7, x3, x5, x6
/*
* Map the FDT blob (maximum 2MB; must be within 512MB of
* PHYS_OFFSET).
*/
/* FDT disable for McKernel */
// mov x3, x21 // FDT phys address
// and x3, x3, #~((1 << 21) - 1) // 2MB aligned
// mov x6, #PAGE_OFFSET
// sub x5, x3, x24 // subtract PHYS_OFFSET
// tst x5, #~((1 << 29) - 1) // within 512MB?
// csel x21, xzr, x21, ne // zero the FDT pointer
// b.ne 1f
// add x5, x5, x6 // __va(FDT blob)
// add x6, x5, #1 << 21 // 2MB for the FDT blob
// sub x6, x6, #1 // inclusive range
// create_block_map x0, x7, x3, x5, x6
1:
/*
* Since the page tables have been populated with non-cacheable
* accesses (MMU disabled), invalidate the idmap and swapper page
* tables again to remove any speculatively loaded cache lines.
*/
mov x0, x25
add x1, x26, #SWAPPER_DIR_SIZE
bl __inval_cache_range
mov lr, x27
ret
ENDPROC(__create_page_tables)
.ltorg
/*
* If we're fortunate enough to boot at EL2, ensure that the world is
* sane before dropping to EL1.
*
* Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in x20 if
* booted in EL1 or EL2 respectively.
*/
ENTRY(el2_setup)
mrs x0, CurrentEL
cmp x0, #CurrentEL_EL2
b.ne 1f
mrs x0, sctlr_el2
CPU_BE( orr x0, x0, #(1 << 25) ) // Set the EE bit for EL2
CPU_LE( bic x0, x0, #(1 << 25) ) // Clear the EE bit for EL2
msr sctlr_el2, x0
b 2f
1: mrs x0, sctlr_el1
CPU_BE( orr x0, x0, #(3 << 24) ) // Set the EE and E0E bits for EL1
CPU_LE( bic x0, x0, #(3 << 24) ) // Clear the EE and E0E bits for EL1
msr sctlr_el1, x0
mov w20, #BOOT_CPU_MODE_EL1 // This cpu booted in EL1
isb
ret
2:
#ifdef CONFIG_ARM64_VHE
/*
* Check for VHE being present. For the rest of the EL2 setup,
* x2 being non-zero indicates that we do have VHE, and that the
* kernel is intended to run at EL2.
*/
mrs x2, id_aa64mmfr1_el1
ubfx x2, x2, #8, #4
#else /* CONFIG_ARM64_VHE */
mov x2, xzr
#endif /* CONFIG_ARM64_VHE */
/* Hyp configuration. */
mov x0, #HCR_RW // 64-bit EL1
cbz x2, set_hcr
orr x0, x0, #HCR_TGE // Enable Host Extensions
orr x0, x0, #HCR_E2H
set_hcr:
msr hcr_el2, x0
isb
/* Generic timers. */
mrs x0, cnthctl_el2
orr x0, x0, #3 // Enable EL1 physical timers
msr cnthctl_el2, x0
msr cntvoff_el2, xzr // Clear virtual offset
#ifdef CONFIG_ARM_GIC_V3
/* GICv3 system register access */
mrs x0, id_aa64pfr0_el1
ubfx x0, x0, #24, #4
cmp x0, #1
b.ne 3f
mrs_s x0, ICC_SRE_EL2
orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1
orr x0, x0, #ICC_SRE_EL2_ENABLE // Set ICC_SRE_EL2.Enable==1
msr_s ICC_SRE_EL2, x0
isb // Make sure SRE is now set
msr_s ICH_HCR_EL2, xzr // Reset ICH_HCR_EL2 to defaults
3:
#endif
/* Populate ID registers. */
mrs x0, midr_el1
mrs x1, mpidr_el1
msr vpidr_el2, x0
msr vmpidr_el2, x1
/*
* When VHE is not in use, early init of EL2 and EL1 needs to be
* done here.
* When VHE _is_ in use, EL1 will not be used in the host and
* requires no configuration, and all non-hyp-specific EL2 setup
* will be done via the _EL1 system register aliases in __cpu_setup.
*/
cbnz x2, 1f
/* sctlr_el1 */
mov x0, #0x0800 // Set/clear RES{1,0} bits
CPU_BE( movk x0, #0x33d0, lsl #16 ) // Set EE and E0E on BE systems
CPU_LE( movk x0, #0x30d0, lsl #16 ) // Clear EE and E0E on LE systems
msr sctlr_el1, x0
/* Coprocessor traps. */
mov x0, #0x33ff
/* SVE register access */
mrs x1, id_aa64pfr0_el1
ubfx x1, x1, #ID_AA64PFR0_SVE_SHIFT, #4
cbz x1, 4f
bic x0, x0, #CPTR_EL2_TZ // Disable SVE traps to EL2
msr cptr_el2, x0 // Disable copro. traps to EL2
isb
mov x1, #ZCR_EL1_LEN_MASK // SVE: Enable full vector
msr_s SYS_ZCR_EL1, x1 // length for EL1.
b 1f
4: msr cptr_el2, x0 // Disable copro. traps to EL2
1:
#ifdef CONFIG_COMPAT
msr hstr_el2, xzr // Disable CP15 traps to EL2
#endif
/* Stage-2 translation */
msr vttbr_el2, xzr
cbz x2, install_el2_stub
mov w20, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2
isb
ret
install_el2_stub:
/* Hypervisor stub */
adrp x0, __hyp_stub_vectors
add x0, x0, #:lo12:__hyp_stub_vectors
msr vbar_el2, x0
/* spsr */
mov x0, #(PSR_F_BIT | PSR_I_BIT | PSR_A_BIT | PSR_D_BIT |\
PSR_MODE_EL1h)
msr spsr_el2, x0
msr elr_el2, lr
mov w20, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2
eret
ENDPROC(el2_setup)
/*
* Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
* in x20. See arch/arm64/include/asm/virt.h for more info.
*/
ENTRY(set_cpu_boot_mode_flag)
ldr x1, =__boot_cpu_mode // Compute __boot_cpu_mode
add x1, x1, x28
cmp w20, #BOOT_CPU_MODE_EL2
b.ne 1f
add x1, x1, #4
1: str w20, [x1] // This CPU has booted in EL1
dmb sy
dc ivac, x1 // Invalidate potentially stale cache line
ret
ENDPROC(set_cpu_boot_mode_flag)
#if defined(CONFIG_HAS_NMI)
/*
* void maybe_switch_to_sysreg_gic_cpuif(void)
*
* Enable interrupt controller system register access if this feature
* has been detected by the alternatives system.
*
* Before we jump into generic code we must enable interrupt controller system
* register access because this is required by the irqflags macros. We must
* also mask interrupts at the PMR and unmask them within the PSR. That leaves
* us set up and ready for the kernel to make its first call to
* arch_local_irq_enable().
*
*/
ENTRY(maybe_switch_to_sysreg_gic_cpuif)
mrs_s x0, ICC_SRE_EL1
orr x0, x0, #1
msr_s ICC_SRE_EL1, x0 // Set ICC_SRE_EL1.SRE==1
isb // Make sure SRE is now set
mov x0, ICC_PMR_EL1_MASKED
msr_s ICC_PMR_EL1, x0 // Prepare for unmask of I bit
msr daifclr, #2 // Clear the I bit
ret
ENDPROC(maybe_switch_to_sysreg_gic_cpuif)
#else
ENTRY(maybe_switch_to_sysreg_gic_cpuif)
ret
ENDPROC(maybe_switch_to_sysreg_gic_cpuif)
#endif /* defined(CONFIG_HAS_NMI) */
/*
* We need to find out the CPU boot mode long after boot, so we need to
* store it in a writable variable.
*
* This is not in .bss, because we set it sufficiently early that the boot-time
* zeroing of .bss would clobber it.
*/
.pushsection .data..cacheline_aligned
ENTRY(__boot_cpu_mode)
.align L1_CACHE_SHIFT
.long BOOT_CPU_MODE_EL2
.long 0
.popsection
ENTRY(secondary_entry_common)
bl el2_setup // Drop to EL1
bl set_cpu_boot_mode_flag
b secondary_startup
ENDPROC(secondary_entry_common)
ENTRY(secondary_startup)
/*
* Common entry point for secondary CPUs.
*/
mrs x22, midr_el1 // x22=cpuid
mov x0, x22
bl lookup_processor_type
mov x23, x0 // x23=current cpu_table
cbz x23, __error_p // invalid processor (x23=0)?
pgtbl x25, x26, x28 // x25=TTBR0, x26=TTBR1
ldr x12, [x23, #CPU_INFO_SETUP]
add x12, x12, x28 // __virt_to_phys
blr x12 // initialise processor
ldr x21, =secondary_data
ldr x27, =__secondary_switched // address to jump to after enabling the MMU
b __enable_mmu
ENDPROC(secondary_startup)
ENTRY(__secondary_switched)
ldr x0, [x21, #SECONDARY_DATA_STACK] // get secondary_data.stack
mov sp, x0
/*
* Conditionally switch to GIC PMR for interrupt masking (this
* will be a nop if we are using normal interrupt masking)
*/
bl maybe_switch_to_sysreg_gic_cpuif
mov x29, #0
adr x1, secondary_data
ldr x0, [x1, #SECONDARY_DATA_ARG] // get secondary_data.arg
ldr x27, [x1, #SECONDARY_DATA_NEXT_PC] // get secondary_data.next_pc
br x27 // secondary_data.next_pc(secondary_data.arg);
ENDPROC(__secondary_switched)
/*
* Set up common bits before finally enabling the MMU. Essentially this is just
* loading the page table pointer and vector base registers.
*
* On entry to this code, x0 must contain the SCTLR_EL1 value for turning on
* the MMU.
*/
__enable_mmu:
ldr x5, =vectors
msr vbar_el1, x5
msr ttbr0_el1, x25 // load TTBR0
msr ttbr1_el1, x26 // load TTBR1
isb
b __turn_mmu_on
ENDPROC(__enable_mmu)
/*
* Enable the MMU. This completely changes the structure of the visible memory
* space. You will not be able to trace execution through this.
*
* x0 = system control register
* x27 = *virtual* address to jump to upon completion
*
* other registers depend on the function called upon completion
*
* We align the entire function to the smallest power of two larger than it to
* ensure it fits within a single block map entry. Otherwise, were PHYS_OFFSET
* close to the end of a 512MB or 1GB block, we might require an additional
* table to map the entire function.
*/
.align 4
__turn_mmu_on:
msr sctlr_el1, x0
isb
br x27
ENDPROC(__turn_mmu_on)
/*
 * Calculate the start of physical memory. The literal pool at 1f below
 * holds the link-time (virtual) addresses of itself and of KERNEL_START;
 * subtracting the stored address from the PC-relative (physical) one
 * yields the VA-to-PA offset, from which PHYS_OFFSET follows.
 */
__calc_phys_offset:
adr x0, 1f
ldp x1, x2, [x0]
sub x28, x0, x1 // x28 = PHYS_OFFSET - KERNEL_START
add x24, x2, x28 // x24 = PHYS_OFFSET
ret
ENDPROC(__calc_phys_offset)
.align 3
1: .quad .
.quad KERNEL_START
/*
* Exception handling. Something went wrong and we can't proceed. We ought to
* tell the user, but since we don't have any guarantee that we're even
* running on the right architecture, we do virtually nothing.
*/
__error_p:
ENDPROC(__error_p)
__error:
1: nop
b 1b
ENDPROC(__error)
/*
* This function gets the processor ID in w0 and searches the cpu_table[] for
* a match. It returns a pointer to the struct cpu_info it found. The
* cpu_table[] must end with an empty (all zeros) structure.
*
* This routine can be called via C code and it needs to work with the MMU
* both disabled and enabled (the offset is calculated automatically).
*/
ENTRY(lookup_processor_type)
adr x1, __lookup_processor_type_data
ldp x2, x3, [x1]
sub x1, x1, x2 // get offset between VA and PA
add x3, x3, x1 // convert VA to PA
1:
ldp w5, w6, [x3] // load cpu_id_val and cpu_id_mask
cbz w5, 2f // end of list?
and w6, w6, w0
cmp w5, w6
b.eq 3f
add x3, x3, #CPU_INFO_SZ
b 1b
2:
mov x3, #0 // unknown processor
3:
mov x0, x3
ret
ENDPROC(lookup_processor_type)
.align 3
.type __lookup_processor_type_data, %object
__lookup_processor_type_data:
.quad .
.quad cpu_table
.size __lookup_processor_type_data, . - __lookup_processor_type_data

View File

@ -0,0 +1,409 @@
/* hw_breakpoint.c COPYRIGHT FUJITSU LIMITED 2016 */
#include <ihk/debug.h>
#include <cputype.h>
#include <errno.h>
#include <elfcore.h>
#include <ptrace.h>
#include <hw_breakpoint.h>
#include <arch-memory.h>
#include <signal.h>
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::core_num_[brps|wrps] */
/* Number of BRP/WRP registers on this CPU. */
int core_num_brps;
int core_num_wrps;
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::get_num_brps */
/* Determine number of BRP registers available. */
int get_num_brps(void)
{
return ((read_cpuid(ID_AA64DFR0_EL1) >> 12) & 0xf) + 1;
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::get_num_wrps */
/* Determine number of WRP registers available. */
int get_num_wrps(void)
{
return ((read_cpuid(ID_AA64DFR0_EL1) >> 20) & 0xf) + 1;
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::hw_breakpoint_slots */
int hw_breakpoint_slots(int type)
{
/*
* We can be called early, so don't rely on
* our static variables being initialised.
*/
switch (type) {
case TYPE_INST:
return get_num_brps();
case TYPE_DATA:
return get_num_wrps();
default:
kprintf("unknown slot type: %d\n", type);
return 0;
}
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::READ_WB_REG_CASE */
#define READ_WB_REG_CASE(OFF, N, REG, VAL) \
case (OFF + N): \
AARCH64_DBG_READ(N, REG, VAL); \
break
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::READ_WB_REG_CASE */
#define WRITE_WB_REG_CASE(OFF, N, REG, VAL) \
case (OFF + N): \
AARCH64_DBG_WRITE(N, REG, VAL); \
break
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::GEN_READ_WB_REG_CASES */
#define GEN_READ_WB_REG_CASES(OFF, REG, VAL) \
READ_WB_REG_CASE(OFF, 0, REG, VAL); \
READ_WB_REG_CASE(OFF, 1, REG, VAL); \
READ_WB_REG_CASE(OFF, 2, REG, VAL); \
READ_WB_REG_CASE(OFF, 3, REG, VAL); \
READ_WB_REG_CASE(OFF, 4, REG, VAL); \
READ_WB_REG_CASE(OFF, 5, REG, VAL); \
READ_WB_REG_CASE(OFF, 6, REG, VAL); \
READ_WB_REG_CASE(OFF, 7, REG, VAL); \
READ_WB_REG_CASE(OFF, 8, REG, VAL); \
READ_WB_REG_CASE(OFF, 9, REG, VAL); \
READ_WB_REG_CASE(OFF, 10, REG, VAL); \
READ_WB_REG_CASE(OFF, 11, REG, VAL); \
READ_WB_REG_CASE(OFF, 12, REG, VAL); \
READ_WB_REG_CASE(OFF, 13, REG, VAL); \
READ_WB_REG_CASE(OFF, 14, REG, VAL); \
READ_WB_REG_CASE(OFF, 15, REG, VAL)
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::GEN_WRITE_WB_REG_CASES */
#define GEN_WRITE_WB_REG_CASES(OFF, REG, VAL) \
WRITE_WB_REG_CASE(OFF, 0, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 1, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 2, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 3, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 4, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 5, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 6, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 7, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 8, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 9, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 10, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 11, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 12, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 13, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 14, REG, VAL); \
WRITE_WB_REG_CASE(OFF, 15, REG, VAL)
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::read_wb_reg */
unsigned long read_wb_reg(int reg, int n)
{
unsigned long val = 0;
switch (reg + n) {
GEN_READ_WB_REG_CASES(AARCH64_DBG_REG_BVR, AARCH64_DBG_REG_NAME_BVR, val);
GEN_READ_WB_REG_CASES(AARCH64_DBG_REG_BCR, AARCH64_DBG_REG_NAME_BCR, val);
GEN_READ_WB_REG_CASES(AARCH64_DBG_REG_WVR, AARCH64_DBG_REG_NAME_WVR, val);
GEN_READ_WB_REG_CASES(AARCH64_DBG_REG_WCR, AARCH64_DBG_REG_NAME_WCR, val);
default:
kprintf("attempt to read from unknown breakpoint register %d\n", n);
}
return val;
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::write_wb_reg */
void write_wb_reg(int reg, int n, unsigned long val)
{
switch (reg + n) {
GEN_WRITE_WB_REG_CASES(AARCH64_DBG_REG_BVR, AARCH64_DBG_REG_NAME_BVR, val);
GEN_WRITE_WB_REG_CASES(AARCH64_DBG_REG_BCR, AARCH64_DBG_REG_NAME_BCR, val);
GEN_WRITE_WB_REG_CASES(AARCH64_DBG_REG_WVR, AARCH64_DBG_REG_NAME_WVR, val);
GEN_WRITE_WB_REG_CASES(AARCH64_DBG_REG_WCR, AARCH64_DBG_REG_NAME_WCR, val);
default:
kprintf("attempt to write to unknown breakpoint register %d\n", n);
}
isb();
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::hw_breakpoint_reset */
void hw_breakpoint_reset(void)
{
int i = 0;
/* clear DBGBVR<n>_EL1 and DBGBCR<n>_EL1 (n=0-(core_num_brps-1)) */
for (i = 0; i < core_num_brps; i++) {
write_wb_reg(AARCH64_DBG_REG_BVR, i, 0UL);
write_wb_reg(AARCH64_DBG_REG_BCR, i, 0UL);
}
/* clear DBGWVR<n>_EL1 and DBGWCR<n>_EL1 (n=0-(core_num_wrps-1)) */
for (i = 0; i < core_num_wrps; i++) {
write_wb_reg(AARCH64_DBG_REG_WVR, i, 0UL);
write_wb_reg(AARCH64_DBG_REG_WCR, i, 0UL);
}
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::arch_hw_breakpoint_init */
void arch_hw_breakpoint_init(void)
{
struct user_hwdebug_state hws;
int max_hws_dbg_regs = sizeof(hws.dbg_regs) / sizeof(hws.dbg_regs[0]);
core_num_brps = get_num_brps();
core_num_wrps = get_num_wrps();
if (max_hws_dbg_regs < core_num_brps) {
kprintf("user_hwdebug_state has fewer slots than the number of available BRP registers; clamping.\n");
core_num_brps = max_hws_dbg_regs;
}
if (max_hws_dbg_regs < core_num_wrps) {
kprintf("user_hwdebug_state has fewer slots than the number of available WRP registers; clamping.\n");
core_num_wrps = max_hws_dbg_regs;
}
hw_breakpoint_reset();
}
struct arch_hw_breakpoint_ctrl {
unsigned int __reserved : 19,
len : 8,
type : 2,
privilege : 2,
enabled : 1;
};
static inline unsigned int encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
{
return (ctrl.len << 5) | (ctrl.type << 3) | (ctrl.privilege << 1) |
ctrl.enabled;
}
static inline void decode_ctrl_reg(unsigned int reg, struct arch_hw_breakpoint_ctrl *ctrl)
{
ctrl->enabled = reg & 0x1;
reg >>= 1;
ctrl->privilege = reg & 0x3;
reg >>= 2;
ctrl->type = reg & 0x3;
reg >>= 2;
ctrl->len = reg & 0xff;
}
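A small illustrative round trip through these two helpers, assuming the Linux-style constants used elsewhere in this file (ARM_BREAKPOINT_LEN_4 being the 0b1111 byte-address-select value):
/* Build a 4-byte EL0 store watchpoint control value and decode it back
 * (illustration only; not part of the original file). */
struct arch_hw_breakpoint_ctrl ctrl = {
	.len = ARM_BREAKPOINT_LEN_4,		/* byte-address-select bits */
	.type = ARM_BREAKPOINT_STORE,
	.privilege = AARCH64_BREAKPOINT_EL0,
	.enabled = 1,
};
unsigned int reg = encode_ctrl_reg(ctrl);	/* (len<<5)|(type<<3)|(priv<<1)|en */

struct arch_hw_breakpoint_ctrl decoded;
decode_ctrl_reg(reg, &decoded);			/* recovers the fields above */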
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::arch_bp_generic_fields */
/*
* Extract generic type and length encodings from an arch_hw_breakpoint_ctrl.
* Hopefully this will disappear when ptrace can bypass the conversion
* to generic breakpoint descriptions.
*/
int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
int *gen_len, int *gen_type)
{
/* Type */
switch (ctrl.type) {
case ARM_BREAKPOINT_EXECUTE:
*gen_type = HW_BREAKPOINT_X;
break;
case ARM_BREAKPOINT_LOAD:
*gen_type = HW_BREAKPOINT_R;
break;
case ARM_BREAKPOINT_STORE:
*gen_type = HW_BREAKPOINT_W;
break;
case ARM_BREAKPOINT_LOAD | ARM_BREAKPOINT_STORE:
*gen_type = HW_BREAKPOINT_RW;
break;
default:
return -EINVAL;
}
/* Len */
switch (ctrl.len) {
case ARM_BREAKPOINT_LEN_1:
*gen_len = HW_BREAKPOINT_LEN_1;
break;
case ARM_BREAKPOINT_LEN_2:
*gen_len = HW_BREAKPOINT_LEN_2;
break;
case ARM_BREAKPOINT_LEN_4:
*gen_len = HW_BREAKPOINT_LEN_4;
break;
case ARM_BREAKPOINT_LEN_8:
*gen_len = HW_BREAKPOINT_LEN_8;
break;
default:
return -EINVAL;
}
return 0;
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::arch_check_bp_in_kernelspace */
/*
* Check whether bp virtual address is in kernel space.
*/
int arch_check_bp_in_kernelspace(unsigned long addr, unsigned int len)
{
return (addr >= USER_END) && ((addr + len - 1) >= USER_END);
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::arch_validate_hwbkpt_settings */
int arch_validate_hwbkpt_settings(long note_type, struct user_hwdebug_state *hws, size_t len)
{
int i;
unsigned long alignment_mask;
size_t cpysize, cpynum;
switch(note_type) {
case NT_ARM_HW_BREAK: /* breakpoint */
alignment_mask = 0x3;
break;
case NT_ARM_HW_WATCH: /* watchpoint */
alignment_mask = 0x7;
break;
default:
return -EINVAL;
}
cpysize = len - offsetof(struct user_hwdebug_state, dbg_regs[0]);
cpynum = cpysize / sizeof(hws->dbg_regs[0]);
for (i = 0; i < cpynum; i++) {
unsigned long addr = hws->dbg_regs[i].addr;
unsigned int uctrl = hws->dbg_regs[i].ctrl;
struct arch_hw_breakpoint_ctrl ctrl;
int err, len, type;
/* skip empty dbg_regs entries */
if (addr == 0 && uctrl == 0) {
continue;
}
/* check address alignment */
if (addr & alignment_mask) {
return -EINVAL;
}
/* decode control bit */
decode_ctrl_reg(uctrl, &ctrl);
/* disabled, continue */
if (!ctrl.enabled) {
continue;
}
err = arch_bp_generic_fields(ctrl, &len, &type);
if (err) {
return err;
}
/* type check */
switch (note_type) {
case NT_ARM_HW_BREAK: /* breakpoint */
if ((type & HW_BREAKPOINT_X) != type) {
return -EINVAL;
}
break;
case NT_ARM_HW_WATCH: /* watchpoint */
if ((type & HW_BREAKPOINT_RW) != type) {
return -EINVAL;
}
break;
default:
return -EINVAL;
}
/* set the privilege level */
if (arch_check_bp_in_kernelspace(addr, len)) {
/* kernel space breakpoint unsupported. */
return -EINVAL;
} else {
ctrl.privilege = AARCH64_BREAKPOINT_EL0;
}
/* ctrl check OK. */
hws->dbg_regs[i].ctrl = encode_ctrl_reg(ctrl);
}
return 0;
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::breakpoint_handler */
/*
* Debug exception handlers.
*/
int breakpoint_handler(unsigned long unused, unsigned int esr, struct pt_regs *regs)
{
int i = 0;
unsigned long val;
unsigned int ctrl_reg;
struct arch_hw_breakpoint_ctrl ctrl;
siginfo_t info;
for (i = 0; i < core_num_brps; i++) {
/* Check if the breakpoint value matches. */
val = read_wb_reg(AARCH64_DBG_REG_BVR, i);
if (val != (regs->pc & ~0x3)) {
continue;
}
/* Possible match, check the byte address select to confirm. */
ctrl_reg = read_wb_reg(AARCH64_DBG_REG_BCR, i);
decode_ctrl_reg(ctrl_reg, &ctrl);
if (!((1 << (regs->pc & 0x3)) & ctrl.len)) {
continue;
}
/* send SIGTRAP */
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_HWBKPT;
info._sifields._sigfault.si_addr = (void *)regs->pc;
set_signal(SIGTRAP, regs, &info);
}
return 0;
}
/* @ref.impl arch/arm64/kernel/hw_breakpoint.c::watchpoint_handler */
int watchpoint_handler(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
int i = 0;
int access;
unsigned long val;
unsigned int ctrl_reg;
struct arch_hw_breakpoint_ctrl ctrl;
siginfo_t info;
for (i = 0; i < core_num_wrps; i++) {
/* Check if the watchpoint value matches. */
val = read_wb_reg(AARCH64_DBG_REG_WVR, i);
if (val != (addr & ~0x7)) {
continue;
}
/* Possible match, check the byte address select to confirm. */
ctrl_reg = read_wb_reg(AARCH64_DBG_REG_WCR, i);
decode_ctrl_reg(ctrl_reg, &ctrl);
if (!((1 << (addr & 0x7)) & ctrl.len)) {
continue;
}
/*
* Check that the access type matches.
* 0 => load, otherwise => store
*/
access = (esr & AARCH64_ESR_ACCESS_MASK) ? ARM_BREAKPOINT_STORE :
ARM_BREAKPOINT_LOAD;
if (!(access & ctrl.type)) {
continue;
}
/* send SIGTRAP */
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_HWBKPT;
info._sifields._sigfault.si_addr = (void *)addr;
set_signal(SIGTRAP, regs, &info);
}
return 0;
}

View File

@ -0,0 +1,58 @@
/* hyp-stub.S COPYRIGHT FUJITSU LIMITED 2015 */
#include <linkage.h>
#include <assembler.h>
.text
.align 11
ENTRY(__hyp_stub_vectors)
ventry el2_sync_invalid // Synchronous EL2t
ventry el2_irq_invalid // IRQ EL2t
ventry el2_fiq_invalid // FIQ EL2t
ventry el2_error_invalid // Error EL2t
ventry el2_sync_invalid // Synchronous EL2h
ventry el2_irq_invalid // IRQ EL2h
ventry el2_fiq_invalid // FIQ EL2h
ventry el2_error_invalid // Error EL2h
ventry el1_sync // Synchronous 64-bit EL1
ventry el1_irq_invalid // IRQ 64-bit EL1
ventry el1_fiq_invalid // FIQ 64-bit EL1
ventry el1_error_invalid // Error 64-bit EL1
ventry el1_sync_invalid // Synchronous 32-bit EL1
ventry el1_irq_invalid // IRQ 32-bit EL1
ventry el1_fiq_invalid // FIQ 32-bit EL1
ventry el1_error_invalid // Error 32-bit EL1
ENDPROC(__hyp_stub_vectors)
.align 11
el1_sync:
mrs x1, esr_el2
lsr x1, x1, #26
cmp x1, #0x16
b.ne 2f // Not an HVC trap
cbz x0, 1f
msr vbar_el2, x0 // Set vbar_el2
b 2f
1: mrs x0, vbar_el2 // Return vbar_el2
2: eret
ENDPROC(el1_sync)
.macro invalid_vector label
\label:
b \label
ENDPROC(\label)
.endm
invalid_vector el2_sync_invalid
invalid_vector el2_irq_invalid
invalid_vector el2_fiq_invalid
invalid_vector el2_error_invalid
invalid_vector el1_sync_invalid
invalid_vector el1_irq_invalid
invalid_vector el1_fiq_invalid
invalid_vector el1_error_invalid

View File

@ -0,0 +1,19 @@
/* arch-bitops.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_COMMON_BITOPS_H
#define __HEADER_ARM64_COMMON_BITOPS_H
#ifndef INCLUDE_BITOPS_H
# error only <bitops.h> can be included directly
#endif
#ifndef __ASSEMBLY__
#include "bitops-fls.h"
#include "bitops-__ffs.h"
#include "bitops-ffz.h"
#include "bitops-set_bit.h"
#include "bitops-clear_bit.h"
#endif /*__ASSEMBLY__*/
#endif /* !__HEADER_ARM64_COMMON_BITOPS_H */

View File

@ -0,0 +1,145 @@
/* arch-futex.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_ARCH_FUTEX_H
#define __HEADER_ARM64_COMMON_ARCH_FUTEX_H
/*
* @ref.impl
* linux-linaro/arch/arm64/include/asm/futex.h:__futex_atomic_op
*/
#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
asm volatile( \
"1: ldxr %w1, %2\n" \
insn "\n" \
"2: stlxr %w3, %w0, %2\n" \
" cbnz %w3, 1b\n" \
" dmb ish\n" \
"3:\n" \
" .pushsection .fixup,\"ax\"\n" \
" .align 2\n" \
"4: mov %w0, %w5\n" \
" b 3b\n" \
" .popsection\n" \
" .pushsection __ex_table,\"a\"\n" \
" .align 3\n" \
" .quad 1b, 4b, 2b, 4b\n" \
" .popsection\n" \
: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp) \
: "r" (oparg), "Ir" (-EFAULT) \
: "memory")
/*
* @ref.impl
* linux-linaro/arch/arm64/include/asm/futex.h:futex_atomic_op_inuser
*/
static inline int futex_atomic_op_inuser(int encoded_op, int __user *uaddr)
{
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
int oldval = 0, ret, tmp;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
#ifdef __UACCESS__
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
return -EFAULT;
#endif
// pagefault_disable(); /* implies preempt_disable() */
switch (op) {
case FUTEX_OP_SET:
__futex_atomic_op("mov %w0, %w4",
ret, oldval, uaddr, tmp, oparg);
break;
case FUTEX_OP_ADD:
__futex_atomic_op("add %w0, %w1, %w4",
ret, oldval, uaddr, tmp, oparg);
break;
case FUTEX_OP_OR:
__futex_atomic_op("orr %w0, %w1, %w4",
ret, oldval, uaddr, tmp, oparg);
break;
case FUTEX_OP_ANDN:
__futex_atomic_op("and %w0, %w1, %w4",
ret, oldval, uaddr, tmp, ~oparg);
break;
case FUTEX_OP_XOR:
__futex_atomic_op("eor %w0, %w1, %w4",
ret, oldval, uaddr, tmp, oparg);
break;
default:
ret = -ENOSYS;
}
// pagefault_enable(); /* subsumes preempt_enable() */
if (!ret) {
switch (cmp) {
case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break;
case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break;
case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break;
case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break;
default: ret = -ENOSYS;
}
}
return ret;
}
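For reference, a worked decode of one encoded_op value; the constants follow the standard Linux uapi futex encoding (FUTEX_OP_ADD == 1, FUTEX_OP_CMP_GT == 5), which this file assumes:
/* encoded_op layout: op[31:28] cmp[27:24] oparg[23:12] cmparg[11:0];
 * the shift pairs in the decoder above sign-extend the 12-bit args. */
int encoded_op = (FUTEX_OP_ADD << 28)		/* op = add */
               | (FUTEX_OP_CMP_GT << 24)	/* cmp = greater-than */
               | (1 << 12)			/* oparg = 1 */
               | 0;				/* cmparg = 0 */
/* futex_atomic_op_inuser(encoded_op, uaddr) then atomically performs
 *   oldval = *uaddr; *uaddr = oldval + 1;
 * and returns (oldval > 0). */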
/*
* @ref.impl
* linux-linaro/arch/arm64/include/asm/futex.h:futex_atomic_cmpxchg_inatomic
* mckernel/kernel/include/futex.h:futex_atomic_cmpxchg_inatomic (x86 depend)
*/
static inline int
futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
{
int ret = 0;
int val, tmp;
if(uaddr == NULL) {
return -EFAULT;
}
#ifdef __UACCESS__
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int))) {
return -EFAULT;
}
#endif
asm volatile("// futex_atomic_cmpxchg_inatomic\n"
"1: ldxr %w1, %2\n"
" sub %w3, %w1, %w4\n"
" cbnz %w3, 3f\n"
"2: stlxr %w3, %w5, %2\n"
" cbnz %w3, 1b\n"
" dmb ish\n"
"3:\n"
" .pushsection .fixup,\"ax\"\n"
"4: mov %w0, %w6\n"
" b 3b\n"
" .popsection\n"
" .pushsection __ex_table,\"a\"\n"
" .align 3\n"
" .quad 1b, 4b, 2b, 4b\n"
" .popsection\n"
: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp)
: "r" (oldval), "r" (newval), "Ir" (-EFAULT)
: "memory");
return ret;
}
static inline int get_futex_value_locked(uint32_t *dest, uint32_t *from)
{
*dest = *(volatile uint32_t *)from;
return 0;
}
#endif /* !__HEADER_ARM64_COMMON_ARCH_FUTEX_H */

View File

@ -0,0 +1,605 @@
/* arch-lock.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_ARCH_LOCK_H
#define __HEADER_ARM64_COMMON_ARCH_LOCK_H
#define IHK_STATIC_SPINLOCK_FUNCS
#include <ihk/cpu.h>
#include <ihk/atomic.h>
//#define DEBUG_SPINLOCK
//#define DEBUG_MCS_RWLOCK
#if defined(DEBUG_SPINLOCK) || defined(DEBUG_MCS_RWLOCK)
int __kprintf(const char *format, ...);
#endif
/* @ref.impl arch/arm64/include/asm/spinlock_types.h::TICKET_SHIFT */
#define TICKET_SHIFT 16
/* @ref.impl arch/arm64/include/asm/spinlock_types.h::arch_spinlock_t */
typedef struct {
//#ifdef __AARCH64EB__
// uint16_t next;
// uint16_t owner;
//#else /* __AARCH64EB__ */
uint16_t owner;
uint16_t next;
//#endif /* __AARCH64EB__ */
} ihk_spinlock_t;
extern void preempt_enable(void);
extern void preempt_disable(void);
/* @ref.impl arch/arm64/include/asm/spinlock_types.h::__ARCH_SPIN_LOCK_UNLOCKED */
#define SPIN_LOCK_UNLOCKED { 0, 0 }
/* initialized spinlock struct */
static void ihk_mc_spinlock_init(ihk_spinlock_t *lock)
{
*lock = (ihk_spinlock_t)SPIN_LOCK_UNLOCKED;
}
/* @ref.impl arch/arm64/include/asm/spinlock.h::arch_spin_lock */
/* spinlock lock */
#ifdef DEBUG_SPINLOCK
#define ihk_mc_spinlock_lock_noirq(l) { \
__kprintf("[%d] call ihk_mc_spinlock_lock_noirq %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__ihk_mc_spinlock_lock_noirq(l); \
__kprintf("[%d] ret ihk_mc_spinlock_lock_noirq\n", ihk_mc_get_processor_id()); \
}
#else
#define ihk_mc_spinlock_lock_noirq __ihk_mc_spinlock_lock_noirq
#endif
static void __ihk_mc_spinlock_lock_noirq(ihk_spinlock_t *lock)
{
unsigned int tmp;
ihk_spinlock_t lockval, newval;
preempt_disable();
asm volatile(
/* Atomically increment the next ticket. */
" prfm pstl1strm, %3\n"
"1: ldaxr %w0, %3\n"
" add %w1, %w0, %w5\n"
" stxr %w2, %w1, %3\n"
" cbnz %w2, 1b\n"
/* Did we get the lock? */
" eor %w1, %w0, %w0, ror #16\n"
" cbz %w1, 3f\n"
/*
* No: spin on the owner. Send a local event to avoid missing an
* unlock before the exclusive load.
*/
" sevl\n"
"2: wfe\n"
" ldaxrh %w2, %4\n"
" eor %w1, %w2, %w0, lsr #16\n"
" cbnz %w1, 2b\n"
/* We got the lock. Critical section starts here. */
"3:"
: "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
: "Q" (lock->owner), "I" (1 << TICKET_SHIFT)
: "memory");
}
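In C terms, the assembly above is the classic ticket-lock acquire: atomically take the next ticket, then spin until the owner counter reaches it. A minimal sketch using GCC __atomic builtins, assuming the little-endian field order of ihk_spinlock_t above (the real code additionally uses WFE/SEV to idle between polls):
#include <stdint.h>

typedef struct { uint16_t owner; uint16_t next; } ticket_lock_t;

static void ticket_lock(ticket_lock_t *lock)
{
	/* atomically take a ticket: fetch-and-add on the 'next' half */
	uint32_t old = __atomic_fetch_add((uint32_t *)lock,
	                                  1u << TICKET_SHIFT,
	                                  __ATOMIC_ACQUIRE);
	uint16_t ticket = (uint16_t)(old >> TICKET_SHIFT);

	/* spin until the owner counter reaches our ticket */
	while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != ticket)
		;	/* cpu_pause()/wfe in the real implementation */
}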
/* spinlock lock & interrupt disable & PSTATE.DAIF save */
#ifdef DEBUG_SPINLOCK
#define ihk_mc_spinlock_lock(l) ({ unsigned long rc;\
__kprintf("[%d] call ihk_mc_spinlock_lock %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
rc = __ihk_mc_spinlock_lock(l);\
__kprintf("[%d] ret ihk_mc_spinlock_lock\n", ihk_mc_get_processor_id()); rc;\
})
#else
#define ihk_mc_spinlock_lock __ihk_mc_spinlock_lock
#endif
static unsigned long __ihk_mc_spinlock_lock(ihk_spinlock_t *lock)
{
unsigned long flags;
flags = cpu_disable_interrupt_save();
__ihk_mc_spinlock_lock_noirq(lock);
return flags;
}
/* @ref.impl arch/arm64/include/asm/spinlock.h::arch_spin_unlock */
/* spinlock unlock */
#ifdef DEBUG_SPINLOCK
#define ihk_mc_spinlock_unlock_noirq(l) { \
__kprintf("[%d] call ihk_mc_spinlock_unlock_noirq %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__ihk_mc_spinlock_unlock_noirq(l); \
__kprintf("[%d] ret ihk_mc_spinlock_unlock_noirq\n", ihk_mc_get_processor_id()); \
}
#else
#define ihk_mc_spinlock_unlock_noirq __ihk_mc_spinlock_unlock_noirq
#endif
static void __ihk_mc_spinlock_unlock_noirq(ihk_spinlock_t *lock)
{
asm volatile(
" stlrh %w1, %0\n"
: "=Q" (lock->owner)
: "r" (lock->owner + 1)
: "memory");
preempt_enable();
}
/* spinlock unlock & restore PSTATE.DAIF */
#ifdef DEBUG_SPINLOCK
#define ihk_mc_spinlock_unlock(l, f) { \
__kprintf("[%d] call ihk_mc_spinlock_unlock %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__ihk_mc_spinlock_unlock((l), (f)); \
__kprintf("[%d] ret ihk_mc_spinlock_unlock\n", ihk_mc_get_processor_id()); \
}
#else
#define ihk_mc_spinlock_unlock __ihk_mc_spinlock_unlock
#endif
static void __ihk_mc_spinlock_unlock(ihk_spinlock_t *lock, unsigned long flags)
{
__ihk_mc_spinlock_unlock_noirq(lock);
cpu_restore_interrupt(flags);
}
/* An implementation of the Mellor-Crummey Scott (MCS) lock */
typedef struct mcs_lock_node {
unsigned long locked;
struct mcs_lock_node *next;
unsigned long irqsave;
} __attribute__((aligned(64))) mcs_lock_node_t;
static void mcs_lock_init(struct mcs_lock_node *node)
{
node->locked = 0;
node->next = NULL;
}
static void __mcs_lock_lock(struct mcs_lock_node *lock,
struct mcs_lock_node *node)
{
struct mcs_lock_node *pred;
node->next = NULL;
node->locked = 0;
pred = xchg8(&(lock->next), node);
if (pred) {
node->locked = 1;
pred->next = node;
while (node->locked != 0) {
cpu_pause();
}
}
}
static void __mcs_lock_unlock(struct mcs_lock_node *lock,
struct mcs_lock_node *node)
{
if (node->next == NULL) {
struct mcs_lock_node *old = atomic_cmpxchg8(&(lock->next), node, 0);
if (old == node) {
return;
}
while (node->next == NULL) {
cpu_pause();
}
}
node->next->locked = 0;
}
static void mcs_lock_lock_noirq(struct mcs_lock_node *lock,
struct mcs_lock_node *node)
{
preempt_disable();
__mcs_lock_lock(lock, node);
}
static void mcs_lock_unlock_noirq(struct mcs_lock_node *lock,
struct mcs_lock_node *node)
{
__mcs_lock_unlock(lock, node);
preempt_enable();
}
static void mcs_lock_lock(struct mcs_lock_node *lock,
struct mcs_lock_node *node)
{
node->irqsave = cpu_disable_interrupt_save();
mcs_lock_lock_noirq(lock, node);
}
static void mcs_lock_unlock(struct mcs_lock_node *lock,
struct mcs_lock_node *node)
{
mcs_lock_unlock_noirq(lock, node);
cpu_restore_interrupt(node->irqsave);
}
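Usage note: every contender supplies its own queue node, typically on its stack, and must pass the same node to the matching unlock. A hypothetical sketch (the names example_lock and example_critical_section are ours; the lock word is initialized once with mcs_lock_init):
static mcs_lock_node_t example_lock;	/* tail of the waiter queue */

void example_critical_section(void)
{
	mcs_lock_node_t node;		/* this CPU's queue entry */

	mcs_lock_lock(&example_lock, &node);	/* disables interrupts */
	/* ... critical section ... */
	mcs_lock_unlock(&example_lock, &node);	/* restores DAIF state */
}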
#define SPINLOCK_IN_MCS_RWLOCK
// reader/writer lock
typedef struct mcs_rwlock_node {
ihk_atomic_t count; // num of readers (use only common reader)
char type; // lock type
#define MCS_RWLOCK_TYPE_COMMON_READER 0
#define MCS_RWLOCK_TYPE_READER 1
#define MCS_RWLOCK_TYPE_WRITER 2
char locked; // lock
#define MCS_RWLOCK_LOCKED 1
#define MCS_RWLOCK_UNLOCKED 0
char dmy1; // unused
char dmy2; // unused
struct mcs_rwlock_node *next;
} __attribute__((aligned(64))) mcs_rwlock_node_t;
typedef struct mcs_rwlock_node_irqsave {
#ifndef SPINLOCK_IN_MCS_RWLOCK
struct mcs_rwlock_node node;
#endif
unsigned long irqsave;
} __attribute__((aligned(64))) mcs_rwlock_node_irqsave_t;
typedef struct mcs_rwlock_lock {
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_spinlock_t slock;
#else
struct mcs_rwlock_node reader; /* common reader lock */
struct mcs_rwlock_node *node; /* base */
#endif
} __attribute__((aligned(64))) mcs_rwlock_lock_t;
static void
mcs_rwlock_init(struct mcs_rwlock_lock *lock)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_init(&lock->slock);
#else
ihk_atomic_set(&lock->reader.count, 0);
lock->reader.type = MCS_RWLOCK_TYPE_COMMON_READER;
lock->node = NULL;
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_writer_lock_noirq(l, n) { \
__kprintf("[%d] call mcs_rwlock_writer_lock_noirq %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_writer_lock_noirq((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_writer_lock_noirq\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_writer_lock_noirq __mcs_rwlock_writer_lock_noirq
#endif
static void
__mcs_rwlock_writer_lock_noirq(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_lock_noirq(&lock->slock);
#else
struct mcs_rwlock_node *pred;
preempt_disable();
node->type = MCS_RWLOCK_TYPE_WRITER;
node->next = NULL;
pred = xchg8(&(lock->node), node);
if (pred) {
node->locked = MCS_RWLOCK_LOCKED;
pred->next = node;
while (node->locked != MCS_RWLOCK_UNLOCKED) {
cpu_pause();
}
}
#endif
}
#ifndef SPINLOCK_IN_MCS_RWLOCK
static void
mcs_rwlock_unlock_readers(struct mcs_rwlock_lock *lock)
{
struct mcs_rwlock_node *p;
struct mcs_rwlock_node *f = NULL;
struct mcs_rwlock_node *n;
int breakf = 0;
ihk_atomic_inc(&lock->reader.count); // hold the reader count so the reader lock cannot be released while we walk the queue
for(p = &lock->reader; p->next; p = n){
n = p->next;
if(p->next->type == MCS_RWLOCK_TYPE_READER){
p->next = n->next;
if(lock->node == n){
struct mcs_rwlock_node *old;
old = atomic_cmpxchg8(&(lock->node), n, p);
if(old != n){ // couldn't change
while (n->next == NULL) {
cpu_pause();
}
p->next = n->next;
}
else{
breakf = 1;
}
}
else if(p->next == NULL){
while (n->next == NULL) {
cpu_pause();
}
p->next = n->next;
}
if(f){
ihk_atomic_inc(&lock->reader.count);
n->locked = MCS_RWLOCK_UNLOCKED;
}
else
f = n;
n = p;
if(breakf)
break;
}
if(n->next == NULL && lock->node != n){
while (n->next == NULL && lock->node != n) {
cpu_pause();
}
}
}
f->locked = MCS_RWLOCK_UNLOCKED;
}
#endif
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_writer_unlock_noirq(l, n) { \
__kprintf("[%d] call mcs_rwlock_writer_unlock_noirq %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_writer_unlock_noirq((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_writer_unlock_noirq\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_writer_unlock_noirq __mcs_rwlock_writer_unlock_noirq
#endif
static void
__mcs_rwlock_writer_unlock_noirq(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_unlock_noirq(&lock->slock);
#else
if (node->next == NULL) {
struct mcs_rwlock_node *old = atomic_cmpxchg8(&(lock->node), node, 0);
if (old == node) {
goto out;
}
while (node->next == NULL) {
cpu_pause();
}
}
if(node->next->type == MCS_RWLOCK_TYPE_READER){
lock->reader.next = node->next;
mcs_rwlock_unlock_readers(lock);
}
else{
node->next->locked = MCS_RWLOCK_UNLOCKED;
}
out:
preempt_enable();
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_reader_lock_noirq(l, n) { \
__kprintf("[%d] call mcs_rwlock_reader_lock_noirq %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_reader_lock_noirq((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_reader_lock_noirq\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_reader_lock_noirq __mcs_rwlock_reader_lock_noirq
#endif
static inline unsigned int
atomic_inc_ifnot0(ihk_atomic_t *v)
{
unsigned int *p = (unsigned int *)(&(v)->counter);
unsigned int old;
unsigned int new;
unsigned int val;
do{
if(!(old = *p))
break;
new = old + 1;
val = atomic_cmpxchg4(p, old, new);
}while(val != old);
return old;
}
static void
__mcs_rwlock_reader_lock_noirq(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_lock_noirq(&lock->slock);
#else
struct mcs_rwlock_node *pred;
preempt_disable();
node->type = MCS_RWLOCK_TYPE_READER;
node->next = NULL;
node->dmy1 = ihk_mc_get_processor_id();
pred = xchg8(&(lock->node), node);
if (pred) {
if(pred == &lock->reader){
if(atomic_inc_ifnot0(&pred->count)){
struct mcs_rwlock_node *old;
old = atomic_cmpxchg8(&(lock->node), node, pred);
if (old == node) {
goto out;
}
while (node->next == NULL) {
cpu_pause();
}
node->locked = MCS_RWLOCK_LOCKED;
lock->reader.next = node;
mcs_rwlock_unlock_readers(lock);
ihk_atomic_dec(&pred->count);
goto out;
}
}
node->locked = MCS_RWLOCK_LOCKED;
pred->next = node;
while (node->locked != MCS_RWLOCK_UNLOCKED) {
cpu_pause();
}
}
else {
lock->reader.next = node;
mcs_rwlock_unlock_readers(lock);
}
out:
return;
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_reader_unlock_noirq(l, n) { \
__kprintf("[%d] call mcs_rwlock_reader_unlock_noirq %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_reader_unlock_noirq((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_reader_unlock_noirq\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_reader_unlock_noirq __mcs_rwlock_reader_unlock_noirq
#endif
static void
__mcs_rwlock_reader_unlock_noirq(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_unlock_noirq(&lock->slock);
#else
if(ihk_atomic_dec_return(&lock->reader.count))
goto out;
if (lock->reader.next == NULL) {
struct mcs_rwlock_node *old;
old = atomic_cmpxchg8(&(lock->node), &(lock->reader), 0);
if (old == &lock->reader) {
goto out;
}
while (lock->reader.next == NULL) {
cpu_pause();
}
}
if(lock->reader.next->type == MCS_RWLOCK_TYPE_READER){
mcs_rwlock_unlock_readers(lock);
}
else{
lock->reader.next->locked = MCS_RWLOCK_UNLOCKED;
}
out:
preempt_enable();
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_writer_lock(l, n) { \
__kprintf("[%d] call mcs_rwlock_writer_lock %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_writer_lock((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_writer_lock\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_writer_lock __mcs_rwlock_writer_lock
#endif
static void
__mcs_rwlock_writer_lock(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node_irqsave *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
node->irqsave = ihk_mc_spinlock_lock(&lock->slock);
#else
node->irqsave = cpu_disable_interrupt_save();
__mcs_rwlock_writer_lock_noirq(lock, &node->node);
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_writer_unlock(l, n) { \
__kprintf("[%d] call mcs_rwlock_writer_unlock %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_writer_unlock((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_writer_unlock\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_writer_unlock __mcs_rwlock_writer_unlock
#endif
static void
__mcs_rwlock_writer_unlock(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node_irqsave *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_unlock(&lock->slock, node->irqsave);
#else
__mcs_rwlock_writer_unlock_noirq(lock, &node->node);
cpu_restore_interrupt(node->irqsave);
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_reader_lock(l, n) { \
__kprintf("[%d] call mcs_rwlock_reader_lock %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_reader_lock((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_reader_lock\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_reader_lock __mcs_rwlock_reader_lock
#endif
static void
__mcs_rwlock_reader_lock(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node_irqsave *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
node->irqsave = ihk_mc_spinlock_lock(&lock->slock);
#else
node->irqsave = cpu_disable_interrupt_save();
__mcs_rwlock_reader_lock_noirq(lock, &node->node);
#endif
}
#ifdef DEBUG_MCS_RWLOCK
#define mcs_rwlock_reader_unlock(l, n) { \
__kprintf("[%d] call mcs_rwlock_reader_unlock %p %s:%d\n", ihk_mc_get_processor_id(), (l), __FILE__, __LINE__); \
__mcs_rwlock_reader_unlock((l), (n)); \
__kprintf("[%d] ret mcs_rwlock_reader_unlock\n", ihk_mc_get_processor_id()); \
}
#else
#define mcs_rwlock_reader_unlock __mcs_rwlock_reader_unlock
#endif
static void
__mcs_rwlock_reader_unlock(struct mcs_rwlock_lock *lock, struct mcs_rwlock_node_irqsave *node)
{
#ifdef SPINLOCK_IN_MCS_RWLOCK
ihk_mc_spinlock_unlock(&lock->slock, node->irqsave);
#else
__mcs_rwlock_reader_unlock_noirq(lock, &node->node);
cpu_restore_interrupt(node->irqsave);
#endif
}
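The reader/writer variants follow the same pattern. A hypothetical usage sketch (names are ours; with SPINLOCK_IN_MCS_RWLOCK defined the calls fall back to the ticket spinlock, but the calling convention is identical):
static mcs_rwlock_lock_t example_rwlock;	/* init once: mcs_rwlock_init() */

void example_reader(void)
{
	struct mcs_rwlock_node_irqsave node;

	mcs_rwlock_reader_lock(&example_rwlock, &node);
	/* ... read-side critical section ... */
	mcs_rwlock_reader_unlock(&example_rwlock, &node);
}
/* Writers use mcs_rwlock_writer_lock()/mcs_rwlock_writer_unlock()
 * with the same node type. */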
#endif /* !__HEADER_ARM64_COMMON_ARCH_LOCK_H */

View File

@ -0,0 +1,489 @@
/* arch-memory.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_ARCH_MEMORY_H
#define __HEADER_ARM64_COMMON_ARCH_MEMORY_H
#include <const.h>
#define _SZ4KB (1UL<<12)
#define _SZ16KB (1UL<<14)
#define _SZ64KB (1UL<<16)
#ifdef CONFIG_ARM64_64K_PAGES
# define GRANULE_SIZE _SZ64KB
#else
# define GRANULE_SIZE _SZ4KB
#endif
#define VA_BITS CONFIG_ARM64_VA_BITS
/*
* Address define
*/
#define MAP_KERNEL_SHIFT 21
#define MAP_KERNEL_SIZE (UL(1) << MAP_KERNEL_SHIFT)
#define MAP_EARLY_ALLOC_SHIFT 9
#define MAP_EARLY_ALLOC_SIZE (UL(1) << (PAGE_SHIFT + MAP_EARLY_ALLOC_SHIFT))
#define MAP_BOOT_PARAM_SHIFT 21
#define MAP_BOOT_PARAM_SIZE (UL(1) << MAP_BOOT_PARAM_SHIFT)
#if (VA_BITS == 39 && GRANULE_SIZE == _SZ4KB)
#
# define TASK_UNMAPPED_BASE UL(0x0000000800000000)
# define USER_END UL(0x0000002000000000)
# define MAP_VMAP_START UL(0xffffffbdc0000000)
# define MAP_VMAP_SIZE UL(0x0000000100000000)
# define MAP_FIXED_START UL(0xffffffbffbdfd000)
# define MAP_ST_START UL(0xffffffc000000000)
# define MAP_KERNEL_START UL(0xffffffffff800000) // 0xffff_ffff_ff80_0000
# define MAP_ST_SIZE (MAP_KERNEL_START - MAP_ST_START) // 0x0000_003f_ff80_0000
# define MAP_EARLY_ALLOC (MAP_KERNEL_START + MAP_KERNEL_SIZE) // 0xffff_ffff_ffa0_0000
# define MAP_EARLY_ALLOC_END (MAP_EARLY_ALLOC + MAP_EARLY_ALLOC_SIZE)
# define MAP_BOOT_PARAM (MAP_EARLY_ALLOC_END) // 0xffff_ffff_ffc0_0000
# define MAP_BOOT_PARAM_END (MAP_BOOT_PARAM + MAP_BOOT_PARAM_SIZE) // 0xffff_ffff_ffe0_0000
#
#elif (VA_BITS == 42 && GRANULE_SIZE == _SZ64KB)
#
# define TASK_UNMAPPED_BASE UL(0x0000004000000000)
# define USER_END UL(0x0000010000000000)
# define MAP_VMAP_START UL(0xfffffdfee0000000)
# define MAP_VMAP_SIZE UL(0x0000000100000000)
# define MAP_FIXED_START UL(0xfffffdfffbdd0000)
# define MAP_ST_START UL(0xfffffe0000000000)
# define MAP_KERNEL_START UL(0xffffffffe0000000) // 0xffff_ffff_e000_0000
# define MAP_ST_SIZE (MAP_KERNEL_START - MAP_ST_START) // 0x0000_01ff_e000_0000
# define MAP_EARLY_ALLOC (MAP_KERNEL_START + MAP_KERNEL_SIZE) // 0xffff_ffff_e020_0000
# define MAP_EARLY_ALLOC_END (MAP_EARLY_ALLOC + MAP_EARLY_ALLOC_SIZE)
# define MAP_BOOT_PARAM (MAP_EARLY_ALLOC_END) // 0xffff_ffff_e220_0000
# define MAP_BOOT_PARAM_END (MAP_BOOT_PARAM + MAP_BOOT_PARAM_SIZE) // 0xffff_ffff_e240_0000
#
#elif (VA_BITS == 48 && GRANULE_SIZE == _SZ4KB)
#
# define TASK_UNMAPPED_BASE UL(0x0000100000000000)
# define USER_END UL(0x0000400000000000)
# define MAP_VMAP_START UL(0xffff7bffc0000000)
# define MAP_VMAP_SIZE UL(0x0000000100000000)
# define MAP_FIXED_START UL(0xffff7ffffbdfd000)
# define MAP_ST_START UL(0xffff800000000000)
# define MAP_KERNEL_START UL(0xffffffffff800000) // 0xffff_ffff_ff80_0000
# define MAP_ST_SIZE (MAP_KERNEL_START - MAP_ST_START) // 0x0000_7fff_ff80_0000
# define MAP_EARLY_ALLOC (MAP_KERNEL_START + MAP_KERNEL_SIZE) // 0xffff_ffff_ffa0_0000
# define MAP_EARLY_ALLOC_END (MAP_EARLY_ALLOC + MAP_EARLY_ALLOC_SIZE)
# define MAP_BOOT_PARAM (MAP_EARLY_ALLOC_END) // 0xffff_ffff_ffc0_0000
# define MAP_BOOT_PARAM_END (MAP_BOOT_PARAM + MAP_BOOT_PARAM_SIZE) // 0xffff_ffff_ffe0_0000
#
#
#elif (VA_BITS == 48 && GRANULE_SIZE == _SZ64KB)
#
# define TASK_UNMAPPED_BASE UL(0x0000100000000000)
# define USER_END UL(0x0000400000000000)
# define MAP_VMAP_START UL(0xffff780000000000)
# define MAP_VMAP_SIZE UL(0x0000000100000000)
# define MAP_FIXED_START UL(0xffff7ffffbdd0000)
# define MAP_ST_START UL(0xffff800000000000)
# define MAP_KERNEL_START UL(0xffffffffe0000000) // 0xffff_ffff_e000_0000
# define MAP_ST_SIZE (MAP_KERNEL_START - MAP_ST_START) // 0x0000_7fff_e000_0000
# define MAP_EARLY_ALLOC (MAP_KERNEL_START + MAP_KERNEL_SIZE) // 0xffff_ffff_e020_0000
# define MAP_EARLY_ALLOC_END (MAP_EARLY_ALLOC + MAP_EARLY_ALLOC_SIZE)
# define MAP_BOOT_PARAM (MAP_EARLY_ALLOC_END) // 0xffff_ffff_e220_0000
# define MAP_BOOT_PARAM_END (MAP_BOOT_PARAM + MAP_BOOT_PARAM_SIZE) // 0xffff_ffff_e240_0000
#
#else
# error address space is not defined.
#endif
#define STACK_TOP(region) ((region)->user_end)
/*
* pagetable define
*/
#if GRANULE_SIZE == _SZ4KB
# define __PTL4_SHIFT 39
# define __PTL3_SHIFT 30
# define __PTL2_SHIFT 21
# define __PTL1_SHIFT 12
# define PTL4_INDEX_MASK ((UL(1) << 9) - 1)
# define PTL3_INDEX_MASK PTL4_INDEX_MASK
# define PTL2_INDEX_MASK PTL3_INDEX_MASK
# define PTL1_INDEX_MASK PTL2_INDEX_MASK
# define FIRST_LEVEL_BLOCK_SUPPORT 1
#elif GRANULE_SIZE == _SZ16KB
# define __PTL4_SHIFT 47
# define __PTL3_SHIFT 36
# define __PTL2_SHIFT 25
# define __PTL1_SHIFT 14
# define PTL4_INDEX_MASK ((UL(1) << 1) - 1)
# define PTL3_INDEX_MASK ((UL(1) << 11) - 1)
# define PTL2_INDEX_MASK PTL3_INDEX_MASK
# define PTL1_INDEX_MASK PTL2_INDEX_MASK
# define FIRST_LEVEL_BLOCK_SUPPORT 0
#elif GRANULE_SIZE == _SZ64KB
# define __PTL4_SHIFT 0
# define __PTL3_SHIFT 42
# define __PTL2_SHIFT 29
# define __PTL1_SHIFT 16
# define PTL4_INDEX_MASK 0
# define PTL3_INDEX_MASK ((UL(1) << 6) - 1)
# define PTL2_INDEX_MASK ((UL(1) << 13) - 1)
# define PTL1_INDEX_MASK PTL2_INDEX_MASK
# define FIRST_LEVEL_BLOCK_SUPPORT 0
#else
# error unsupported granule size.
#endif
# define __PTL4_SIZE (UL(1) << __PTL4_SHIFT)
# define __PTL3_SIZE (UL(1) << __PTL3_SHIFT)
# define __PTL2_SIZE (UL(1) << __PTL2_SHIFT)
# define __PTL1_SIZE (UL(1) << __PTL1_SHIFT)
/* alignment masks: clear the low (shift) bits */
# define __PTL4_MASK (~(__PTL4_SIZE - 1))
# define __PTL3_MASK (~(__PTL3_SIZE - 1))
# define __PTL2_MASK (~(__PTL2_SIZE - 1))
# define __PTL1_MASK (~(__PTL1_SIZE - 1))
/* calculate entries */
#if (CONFIG_ARM64_PGTABLE_LEVELS > 3) && (VA_BITS > __PTL4_SHIFT)
# define __PTL4_ENTRIES (UL(1) << (VA_BITS - __PTL4_SHIFT))
# define __PTL3_ENTRIES (UL(1) << (__PTL1_SHIFT - 3))
# define __PTL2_ENTRIES (UL(1) << (__PTL1_SHIFT - 3))
# define __PTL1_ENTRIES (UL(1) << (__PTL1_SHIFT - 3))
#elif (CONFIG_ARM64_PGTABLE_LEVELS > 2) && (VA_BITS > __PTL3_SHIFT)
# define __PTL4_ENTRIES 1
# define __PTL3_ENTRIES (UL(1) << (VA_BITS - __PTL3_SHIFT))
# define __PTL2_ENTRIES (UL(1) << (__PTL1_SHIFT - 3))
# define __PTL1_ENTRIES (UL(1) << (__PTL1_SHIFT - 3))
#elif (CONFIG_ARM64_PGTABLE_LEVELS > 1) && (VA_BITS > __PTL2_SHIFT)
# define __PTL4_ENTRIES 1
# define __PTL3_ENTRIES 1
# define __PTL2_ENTRIES (UL(1) << (VA_BITS - __PTL2_SHIFT))
# define __PTL1_ENTRIES (UL(1) << (__PTL1_SHIFT - 3))
#elif VA_BITS > __PTL1_SHIFT
# define __PTL4_ENTRIES 1
# define __PTL3_ENTRIES 1
# define __PTL2_ENTRIES 1
# define __PTL1_ENTRIES (UL(1) << (VA_BITS - __PTL1_SHIFT))
#else
# define __PTL4_ENTRIES 1
# define __PTL3_ENTRIES 1
# define __PTL2_ENTRIES 1
# define __PTL1_ENTRIES 1
#endif
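A worked instantiation of the branches above: with VA_BITS == 39, a 4KB granule and CONFIG_ARM64_PGTABLE_LEVELS == 3, the second branch is taken:
/* VA_BITS == 39, 4KB granule, 3 page-table levels:
 *   __PTL4_ENTRIES = 1                       (no level-4 table)
 *   __PTL3_ENTRIES = 1 << (39 - 30) = 512
 *   __PTL2_ENTRIES = 1 << (12 - 3)  = 512
 *   __PTL1_ENTRIES = 1 << (12 - 3)  = 512
 * Three 512-entry levels, 512 * 8 bytes = one 4KB page per table,
 * covering 2^39 bytes (512GB) of virtual address space. */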
#ifndef __ASSEMBLY__
static const unsigned int PTL4_SHIFT = __PTL4_SHIFT;
static const unsigned int PTL3_SHIFT = __PTL3_SHIFT;
static const unsigned int PTL2_SHIFT = __PTL2_SHIFT;
static const unsigned int PTL1_SHIFT = __PTL1_SHIFT;
static const unsigned long PTL4_SIZE = __PTL4_SIZE;
static const unsigned long PTL3_SIZE = __PTL3_SIZE;
static const unsigned long PTL2_SIZE = __PTL2_SIZE;
static const unsigned long PTL1_SIZE = __PTL1_SIZE;
static const unsigned long PTL4_MASK = __PTL4_MASK;
static const unsigned long PTL3_MASK = __PTL3_MASK;
static const unsigned long PTL2_MASK = __PTL2_MASK;
static const unsigned long PTL1_MASK = __PTL1_MASK;
static const unsigned int PTL4_ENTRIES = __PTL4_ENTRIES;
static const unsigned int PTL3_ENTRIES = __PTL3_ENTRIES;
static const unsigned int PTL2_ENTRIES = __PTL2_ENTRIES;
static const unsigned int PTL1_ENTRIES = __PTL1_ENTRIES;
#else
# define PTL4_SHIFT __PTL4_SHIFT
# define PTL3_SHIFT __PTL3_SHIFT
# define PTL2_SHIFT __PTL2_SHIFT
# define PTL1_SHIFT __PTL1_SHIFT
# define PTL4_SIZE __PTL4_SIZE
# define PTL3_SIZE __PTL3_SIZE
# define PTL2_SIZE __PTL2_SIZE
# define PTL1_SIZE __PTL1_SIZE
# define PTL4_MASK __PTL4_MASK
# define PTL3_MASK __PTL3_MASK
# define PTL2_MASK __PTL2_MASK
# define PTL1_MASK __PTL1_MASK
# define PTL4_ENTRIES __PTL4_ENTRIES
# define PTL3_ENTRIES __PTL3_ENTRIES
# define PTL2_ENTRIES __PTL2_ENTRIES
# define PTL1_ENTRIES __PTL1_ENTRIES
#endif/*__ASSEMBLY__*/
#define __page_offset(addr, size) ((unsigned long)(addr) & ((size) - 1))
#define __page_align(addr, size) ((unsigned long)(addr) & ~((size) - 1))
#define __page_align_up(addr, size) __page_align((unsigned long)(addr) + (size) - 1, size)
/*
* normal page
*/
#define PAGE_SHIFT __PTL1_SHIFT
#define PAGE_SIZE (UL(1) << __PTL1_SHIFT)
#define PAGE_MASK (~(PTL1_SIZE - 1))
#define PAGE_P2ALIGN 0
#define page_offset(addr) __page_offset(addr, PAGE_SIZE)
#define page_align(addr) __page_align(addr, PAGE_SIZE)
#define page_align_up(addr) __page_align_up(addr, PAGE_SIZE)
/*
* large page
*/
#define LARGE_PAGE_SHIFT __PTL2_SHIFT
#define LARGE_PAGE_SIZE (UL(1) << __PTL2_SHIFT)
#define LARGE_PAGE_MASK (~(PTL2_SIZE - 1))
#define LARGE_PAGE_P2ALIGN (LARGE_PAGE_SHIFT - PAGE_SHIFT)
#define large_page_offset(addr) __page_offset(addr, LARGE_PAGE_SIZE)
#define large_page_align(addr) __page_align(addr, LARGE_PAGE_SIZE)
#define large_page_align_up(addr) __page_align_up(addr, LARGE_PAGE_SIZE)
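A quick numeric check of the alignment helpers with a 4KB granule (PAGE_SIZE == 0x1000):
/* page_offset(0x1234)   == 0x234    (offset within the page)
 * page_align(0x1234)    == 0x1000   (round down to a page boundary)
 * page_align_up(0x1234) == 0x2000   (round up to the next boundary)
 * page_align_up(x) == page_align(x) whenever x is already aligned. */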
/*
*
*/
#define TTBR_ASID_SHIFT 48
#define TTBR_ASID_MASK (0xFFFFUL << TTBR_ASID_SHIFT)
#define TTBR_BADDR_MASK (~TTBR_ASID_MASK)
#include "pgtable-hwdef.h"
#define KERNEL_PHYS_OFFSET
#define PT_PHYSMASK PHYS_MASK
/* We allow user programs to access all the memory (D_Block, D_Page) */
#define PFL_KERN_BLK_ATTR PROT_SECT_NORMAL_EXEC
#define PFL_KERN_PAGE_ATTR PAGE_KERNEL_EXEC
/* for the page table entry that points another page table (D_Table) */
#define PFL_PDIR_TBL_ATTR PMD_TYPE_TABLE
#ifdef CONFIG_ARM64_64K_PAGES
# define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS)
#else
# define SWAPPER_PGTABLE_LEVELS (CONFIG_ARM64_PGTABLE_LEVELS - 1)
#endif
#define SWAPPER_DIR_SIZE (SWAPPER_PGTABLE_LEVELS * PAGE_SIZE)
#define IDMAP_DIR_SIZE (3 * PAGE_SIZE)
/* [Page level Write Through] page cache policy: 0 = write-back, 1 = write-through */
#define PFL1_PWT 0 //< DEBUG_ARCH_DEP, direct references from devobj.c wrapped in a function (is_pte_pwd)
/* [Page level Cache Disable] page cache: 0 = enabled, 1 = disabled */
#define PFL1_PCD 0 //< DEBUG_ARCH_DEP, direct references from devobj.c wrapped in a function (is_pte_pcd)
#define PTE_NULL (0)
#define PTE_FILEOFF PTE_SPECIAL
#define PT_ENTRIES (PAGE_SIZE >> 3)
#ifndef __ASSEMBLY__
#include <ihk/types.h>
typedef unsigned long pte_t;
/*
* pagemap kernel ABI bits
*/
#define PM_ENTRY_BYTES sizeof(uint64_t)
#define PM_STATUS_BITS 3
#define PM_STATUS_OFFSET (64 - PM_STATUS_BITS)
#define PM_STATUS_MASK (((1LL << PM_STATUS_BITS) - 1) << PM_STATUS_OFFSET)
#define PM_STATUS(nr) (((nr) << PM_STATUS_OFFSET) & PM_STATUS_MASK)
#define PM_PSHIFT_BITS 6
#define PM_PSHIFT_OFFSET (PM_STATUS_OFFSET - PM_PSHIFT_BITS)
#define PM_PSHIFT_MASK (((1LL << PM_PSHIFT_BITS) - 1) << PM_PSHIFT_OFFSET)
#define PM_PSHIFT(x) (((uint64_t) (x) << PM_PSHIFT_OFFSET) & PM_PSHIFT_MASK)
#define PM_PFRAME_MASK ((1LL << PM_PSHIFT_OFFSET) - 1)
#define PM_PFRAME(x) ((x) & PM_PFRAME_MASK)
#define PM_PRESENT PM_STATUS(4LL)
#define PM_SWAP PM_STATUS(2LL)
/* For easy conversion, these values should match the architecture-specific ones */
enum ihk_mc_pt_attribute {
/* whether the page is resident in physical memory */
PTATTR_ACTIVE = PTE_VALID,
/* read/write flag */
PTATTR_WRITABLE = PTE_RDONLY, // note: sense is inverted relative to the common definition
/* user/privileged flag */
PTATTR_USER = PTE_USER | PTE_NG,
/* the page has been modified */
PTATTR_DIRTY = PTE_DIRTY,
/* large page */
PTATTR_LARGEPAGE = PMD_TABLE_BIT, // note: sense is inverted relative to the common definition
/* remap_file_page flag */
PTATTR_FILEOFF = PTE_FILEOFF,
/* no-execute flag */
PTATTR_NO_EXECUTE = PTE_UXN,
/* uncached */
PTATTR_UNCACHABLE = PTE_ATTRINDX(1),
/* marks a user-space mapping */
PTATTR_FOR_USER = UL(1) << (PHYS_MASK_SHIFT - 1),
/* WriteCombine */
PTATTR_WRITE_COMBINED = PTE_ATTRINDX(2),
};
extern enum ihk_mc_pt_attribute attr_mask;
static inline int pfn_is_write_combined(uintptr_t pfn)
{
return ((pfn & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_NC));
}
// bit definitions whose sense is inverted relative to the common ones
#define attr_flip_bits (PTATTR_WRITABLE | PTATTR_LARGEPAGE)
static inline int pte_is_type_page(const pte_t *ptep, size_t pgsize)
{
int ret = 0; //default D_TABLE
if ((PTL4_SIZE == pgsize && CONFIG_ARM64_PGTABLE_LEVELS > 3) ||
(PTL3_SIZE == pgsize && CONFIG_ARM64_PGTABLE_LEVELS > 2) ||
(PTL2_SIZE == pgsize)) {
// check D_BLOCK
ret = ((*ptep & PMD_TYPE_MASK) == PMD_TYPE_SECT);
}
else if (PTL1_SIZE == pgsize) {
// check D_PAGE
ret = ((*ptep & PTE_TYPE_MASK) == PTE_TYPE_PAGE);
}
return ret;
}
static inline int pte_is_null(pte_t *ptep)
{
return (*ptep == PTE_NULL);
}
static inline int pte_is_present(pte_t *ptep)
{
return !!(*ptep & PMD_SECT_VALID);
}
static inline int pte_is_writable(pte_t *ptep)
{
extern int kprintf(const char *format, ...);
kprintf("ERROR: %s is not implemented. \n", __func__);
return 0;
}
static inline int pte_is_dirty(pte_t *ptep, size_t pgsize)
{
int ret = 0;
int do_check = pte_is_type_page(ptep, pgsize);
if (do_check) {
ret = !!(*ptep & PTE_DIRTY);
}
return ret;
}
static inline int pte_is_fileoff(pte_t *ptep, size_t pgsize)
{
int ret = 0;
int do_check = pte_is_type_page(ptep, pgsize);
if (do_check) {
ret = !!(*ptep & PTE_FILEOFF);
}
return ret;
}
static inline void pte_update_phys(pte_t *ptep, unsigned long phys)
{
*ptep = (*ptep & ~PT_PHYSMASK) | (phys & PT_PHYSMASK);
}
static inline uintptr_t pte_get_phys(pte_t *ptep)
{
return (uintptr_t)(*ptep & PT_PHYSMASK);
}
static inline off_t pte_get_off(pte_t *ptep, size_t pgsize)
{
return (off_t)(*ptep & PHYS_MASK);
}
static inline enum ihk_mc_pt_attribute pte_get_attr(pte_t *ptep, size_t pgsize)
{
enum ihk_mc_pt_attribute attr;
attr = *ptep & attr_mask;
attr ^= attr_flip_bits;
if ((*ptep & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_DEVICE_nGnRE)) {
attr |= PTATTR_UNCACHABLE;
} else if ((*ptep & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_NC)) {
attr |= PTATTR_WRITE_COMBINED;
}
if (((pgsize == PTL2_SIZE) || (pgsize == PTL3_SIZE))
&& ((*ptep & PMD_TYPE_MASK) == PMD_TYPE_SECT)) {
attr |= PTATTR_LARGEPAGE;
}
return attr;
}
static inline void pte_make_null(pte_t *ptep, size_t pgsize)
{
if ((PTL4_SIZE == pgsize && CONFIG_ARM64_PGTABLE_LEVELS > 3) ||
(PTL3_SIZE == pgsize && CONFIG_ARM64_PGTABLE_LEVELS > 2) ||
(PTL2_SIZE == pgsize) ||
(PTL1_SIZE == pgsize)) {
*ptep = PTE_NULL;
}
}
static inline void pte_make_fileoff(off_t off,
enum ihk_mc_pt_attribute ptattr, size_t pgsize, pte_t *ptep)
{
if ((PTL4_SIZE == pgsize && CONFIG_ARM64_PGTABLE_LEVELS > 3) ||
(PTL3_SIZE == pgsize && CONFIG_ARM64_PGTABLE_LEVELS > 2) ||
(PTL2_SIZE == pgsize) ||
(PTL1_SIZE == pgsize)) {
*ptep = PTE_FILEOFF | off | PTE_TYPE_PAGE;
}
}
#if 0 /* XXX: workaround. cannot use panic() here */
static inline void pte_xchg(pte_t *ptep, pte_t *valp)
{
*valp = xchg(ptep, *valp);
}
#else
#define pte_xchg(p,vp) do { *(vp) = xchg((p), *(vp)); } while (0)
#endif
static inline void pte_clear_dirty(pte_t *ptep, size_t pgsize)
{
int do_clear = pte_is_type_page(ptep, pgsize);
if (do_clear) {
*ptep = *ptep & ~PTE_DIRTY;
}
}
static inline void pte_set_dirty(pte_t *ptep, size_t pgsize)
{
int do_set = pte_is_type_page(ptep, pgsize);
if (do_set) {
*ptep |= PTE_DIRTY;
}
}
struct page_table;
void set_pte(pte_t *ppte, unsigned long phys, enum ihk_mc_pt_attribute attr);
pte_t *get_pte(struct page_table *pt, void *virt, enum ihk_mc_pt_attribute attr);
struct page_table *get_init_page_table(void);
void *early_alloc_pages(int nr_pages);
void *get_last_early_heap(void);
void flush_tlb(void);
void flush_tlb_single(unsigned long addr);
void *map_fixed_area(unsigned long phys, unsigned long size, int uncachable);
void set_address_space_id(struct page_table *pt, int asid);
int get_address_space_id(const struct page_table *pt);
typedef pte_t translation_table_t;
void set_translation_table(struct page_table *pt, translation_table_t* tt);
translation_table_t* get_translation_table(const struct page_table *pt);
translation_table_t* get_translation_table_as_paddr(const struct page_table *pt);
extern unsigned long ap_trampoline;
//#define AP_TRAMPOLINE 0x10000
#define AP_TRAMPOLINE_SIZE 0x2000
/* Local is cacheable */
#define IHK_IKC_QUEUE_PT_ATTR (PTATTR_NO_EXECUTE | PTATTR_WRITABLE)
#endif /* !__ASSEMBLY__ */
#endif /* !__HEADER_ARM64_COMMON_ARCH_MEMORY_H */
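
A minimal sketch of how the pte helpers above compose, assuming the caller has already resolved a pte_t pointer and its page size (dump_pte is an illustrative name, not part of this tree):

static void dump_pte(pte_t *ptep, size_t pgsize)
{
	if (pte_is_null(ptep) || !pte_is_present(ptep)) {
		kprintf("pte: not present\n");
		return;
	}
	kprintf("pte: phys=%lx dirty=%d fileoff=%d attr=%x\n",
		pte_get_phys(ptep), pte_is_dirty(ptep, pgsize),
		pte_is_fileoff(ptep, pgsize),
		(unsigned)pte_get_attr(ptep, pgsize));
}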

View File

@ -0,0 +1,72 @@
/* arch-perfctr.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
#ifndef __ARCH_PERFCTR_H__
#define __ARCH_PERFCTR_H__
#include <ihk/types.h>
#include <ihk/cpu.h>
/* @ref.impl arch/arm64/include/asm/pmu.h */
struct arm_pmu {
struct ihk_mc_interrupt_handler* handler;
uint32_t (*read_counter)(int);
void (*write_counter)(int, uint32_t);
void (*reset)(void*);
int (*enable_pmu)(void);
void (*disable_pmu)(void);
int (*enable_counter)(int);
int (*disable_counter)(int);
int (*enable_intens)(int);
int (*disable_intens)(int);
int (*set_event_filter)(unsigned long*, int);
void (*write_evtype)(int, uint32_t);
int (*get_event_idx)(int, unsigned long);
int (*map_event)(uint32_t, uint64_t);
int num_events;
};
static inline const struct arm_pmu* get_cpu_pmu(void)
{
extern struct arm_pmu cpu_pmu;
return &cpu_pmu;
}
int arm64_init_perfctr(void);
int arm64_enable_pmu(void);
void arm64_disable_pmu(void);
int armv8pmu_init(struct arm_pmu* cpu_pmu);
/* TODO[PMU]: these could live in the common part; drop the local definitions once that settles */
/*
* Generalized hardware cache events:
*
* { L1-D, L1-I, LLC, ITLB, DTLB, BPU, NODE } x
* { read, write, prefetch } x
* { accesses, misses }
*/
enum perf_hw_cache_id {
PERF_COUNT_HW_CACHE_L1D = 0,
PERF_COUNT_HW_CACHE_L1I = 1,
PERF_COUNT_HW_CACHE_LL = 2,
PERF_COUNT_HW_CACHE_DTLB = 3,
PERF_COUNT_HW_CACHE_ITLB = 4,
PERF_COUNT_HW_CACHE_BPU = 5,
PERF_COUNT_HW_CACHE_NODE = 6,
PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
};
enum perf_hw_cache_op_id {
PERF_COUNT_HW_CACHE_OP_READ = 0,
PERF_COUNT_HW_CACHE_OP_WRITE = 1,
PERF_COUNT_HW_CACHE_OP_PREFETCH = 2,
PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
};
enum perf_hw_cache_op_result_id {
PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0,
PERF_COUNT_HW_CACHE_RESULT_MISS = 1,
PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
};
#endif
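
For reference, the Linux perf ABI packs these three IDs into one config value: cache id in the low byte, op in the second byte, result in the third. A small sketch of that encoding (hw_cache_config is an illustrative helper, not part of this tree):

static inline uint64_t hw_cache_config(enum perf_hw_cache_id id,
		enum perf_hw_cache_op_id op,
		enum perf_hw_cache_op_result_id res)
{
	/* config = id | (op << 8) | (result << 16), as in perf_event_open(2) */
	return (uint64_t)id | ((uint64_t)op << 8) | ((uint64_t)res << 16);
}

/* e.g. L1-D read misses:
 * hw_cache_config(PERF_COUNT_HW_CACHE_L1D, PERF_COUNT_HW_CACHE_OP_READ,
 *                 PERF_COUNT_HW_CACHE_RESULT_MISS) */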

View File

@ -0,0 +1,13 @@
/* arch-string.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
#ifndef __HEADER_ARM64_COMMON_ARCH_STRING_H
#define __HEADER_ARM64_COMMON_ARCH_STRING_H
#define ARCH_FAST_MEMCPY
extern void *__inline_memcpy(void *to, const void *from, size_t t);
#define ARCH_FAST_MEMSET
extern void *__inline_memset(void *s, unsigned long c, size_t count);
#endif /* __HEADER_ARM64_COMMON_ARCH_STRING_H */

View File

@ -0,0 +1,14 @@
/* arch-timer.h COPYRIGHT FUJITSU LIMITED 2016 */
#ifndef __HEADER_ARM64_COMMON_ARCH_TIMER_H
#define __HEADER_ARM64_COMMON_ARCH_TIMER_H
/* @ref.impl include/clocksource/arm_arch_timer.h */
#define ARCH_TIMER_USR_PCT_ACCESS_EN (1 << 0) /* physical counter */
#define ARCH_TIMER_USR_VCT_ACCESS_EN (1 << 1) /* virtual counter */
#define ARCH_TIMER_VIRT_EVT_EN (1 << 2)
#define ARCH_TIMER_EVT_TRIGGER_SHIFT (4)
#define ARCH_TIMER_EVT_TRIGGER_MASK (0xF << ARCH_TIMER_EVT_TRIGGER_SHIFT)
#define ARCH_TIMER_USR_VT_ACCESS_EN (1 << 8) /* virtual timer registers */
#define ARCH_TIMER_USR_PT_ACCESS_EN (1 << 9) /* physical timer registers */
#endif /* __HEADER_ARM64_COMMON_ARCH_TIMER_H */

View File

@ -0,0 +1,7 @@
/* auxvec.h COPYRIGHT FUJITSU LIMITED 2016 */
#ifndef __HEADER_ARM64_ARCH_AUXVEC_H
#define __HEADER_ARM64_ARCH_AUXVEC_H
#define AT_SYSINFO_EHDR 33
#endif /* __HEADER_ARM64_ARCH_AUXVEC_H */

View File

@ -0,0 +1,105 @@
/* cpu.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
#ifndef __HEADER_ARM64_ARCH_CPU_H
#define __HEADER_ARM64_ARCH_CPU_H
#ifndef __ASSEMBLY__
#define sev() asm volatile("sev" : : : "memory")
#define wfe() asm volatile("wfe" : : : "memory")
#define wfi() asm volatile("wfi" : : : "memory")
#define isb() asm volatile("isb" : : : "memory")
#define dmb(opt) asm volatile("dmb " #opt : : : "memory")
#define dsb(opt) asm volatile("dsb " #opt : : : "memory")
#define mb() dsb(sy)
#define rmb() dsb(ld)
#define wmb() dsb(st)
#define dma_rmb() dmb(oshld)
#define dma_wmb() dmb(oshst)
//#ifndef CONFIG_SMP
//#else
#define smp_mb() dmb(ish)
#define smp_rmb() dmb(ishld)
#define smp_wmb() dmb(ishst)
#define arch_barrier() smp_mb()
#define smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
case 4: \
asm volatile ("stlr %w1, %0" \
: "=Q" (*p) : "r" (v) : "memory"); \
break; \
case 8: \
asm volatile ("stlr %1, %0" \
: "=Q" (*p) : "r" (v) : "memory"); \
break; \
} \
} while (0)
#define smp_load_acquire(p) \
({ \
typeof(*p) ___p1; \
compiletime_assert_atomic_type(*p); \
switch (sizeof(*p)) { \
case 4: \
asm volatile ("ldar %w0, %1" \
: "=r" (___p1) : "Q" (*p) : "memory"); \
break; \
case 8: \
asm volatile ("ldar %0, %1" \
: "=r" (___p1) : "Q" (*p) : "memory"); \
break; \
} \
___p1; \
})
//#endif /*CONFIG_SMP*/
#define read_barrier_depends() do { } while(0)
#define smp_read_barrier_depends() do { } while(0)
#define set_mb(var, value) do { var = value; smp_mb(); } while (0)
#define nop() asm volatile("nop")
#define smp_mb__before_atomic() smp_mb()
#define smp_mb__after_atomic() smp_mb()
/* @ref.impl linux-linaro/arch/arm64/include/asm/arch_timer.h::arch_counter_get_cntvct */
#define read_tsc() \
({ \
unsigned long cval; \
isb(); \
asm volatile("mrs %0, cntvct_el0" : "=r" (cval)); \
cval; \
})
void init_tod_data(void);
#if defined(CONFIG_HAS_NMI)
static inline void cpu_enable_nmi(void)
{
asm volatile("msr daifclr, #2": : : "memory");
}
static inline void cpu_disable_nmi(void)
{
asm volatile("msr daifset, #2": : : "memory");
}
#else/*defined(CONFIG_HAS_NMI)*/
static inline void cpu_enable_nmi(void)
{
}
static inline void cpu_disable_nmi(void)
{
}
#endif/*defined(CONFIG_HAS_NMI)*/
#endif /* __ASSEMBLY__ */
#endif /* !__HEADER_ARM64_ARCH_CPU_H */
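
A minimal producer/consumer sketch of the acquire/release pair above (names are illustrative, and the compiletime_assert_atomic_type() helper the macros reference is assumed available): the release store orders the payload write before the flag write, and the acquire load orders the flag read before the payload read.

static uint64_t payload;
static uint64_t ready;

static void producer(void)
{
	payload = 42;
	smp_store_release(&ready, (uint64_t)1);	/* publish after payload */
}

static uint64_t consumer(void)
{
	while (!smp_load_acquire(&ready))
		;			/* spin until published */
	return payload;			/* guaranteed to observe 42 */
}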

View File

@ -0,0 +1,17 @@
/* mm.h COPYRIGHT FUJITSU LIMITED 2016 */
#ifndef __HEADER_ARM64_ARCH_MM_H
#define __HEADER_ARM64_ARCH_MM_H
struct process_vm;
static inline void
flush_nfo_tlb(void)
{
}
static inline void
flush_nfo_tlb_mm(struct process_vm *vm)
{
}
#endif /* __HEADER_ARM64_ARCH_MM_H */

View File

@ -0,0 +1,37 @@
/* mman.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
/* @ref.impl linux-linaro/include/uapi/asm-generic/mman.h */
#ifndef __HEADER_ARM64_ARCH_MMAN_H
#define __HEADER_ARM64_ARCH_MMAN_H
#include <arch-memory.h>
/*
* mapping flags
*/
#define MAP_GROWSDOWN 0x0100 /* stack-like segment */
#define MAP_DENYWRITE 0x0800 /* ETXTBSY */
#define MAP_EXECUTABLE 0x1000 /* mark it as an executable */
#define MAP_LOCKED 0x2000 /* pages are locked */
#define MAP_NORESERVE 0x4000 /* don't check for reservations */
#define MAP_POPULATE 0x8000 /* populate (prefault) pagetables */
#define MAP_NONBLOCK 0x10000 /* do not block on IO */
#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB 0x40000 /* create a huge page mapping */
/* Bits [26:31] are reserved, see mman-common.h for MAP_HUGETLB usage */
#define MAP_HUGE_SHIFT 26
#if FIRST_LEVEL_BLOCK_SUPPORT
# define MAP_HUGE_FIRST_BLOCK (__PTL3_SHIFT << MAP_HUGE_SHIFT)
#else
# define MAP_HUGE_FIRST_BLOCK -1 /* not supported */
#endif
#define MAP_HUGE_SECOND_BLOCK (__PTL2_SHIFT << MAP_HUGE_SHIFT)
/*
* for mlockall()
*/
#define MCL_CURRENT 1 /* lock all current mappings */
#define MCL_FUTURE 2 /* lock all future mappings */
#endif /* __HEADER_ARM64_ARCH_MMAN_H */
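
The MAP_HUGE_* values above follow the Linux convention of placing log2(pagesize) in mmap flag bits [26:31]. A hedged sketch of a caller (MAP_ANONYMOUS and the mmap() prototype are assumed from the common mman definitions):

static inline void *map_second_level_block(size_t len)
{
	/* request a PTL2-sized (block) huge mapping */
	return mmap(NULL, len, PROT_READ | PROT_WRITE,
		MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		MAP_HUGE_SECOND_BLOCK, -1, 0);
}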

View File

@ -0,0 +1,60 @@
#ifndef ARCH_RUSAGE_H_INCLUDED
#define ARCH_RUSAGE_H_INCLUDED
#define DEBUG_RUSAGE
#define IHK_OS_PGSIZE_4KB 0
#define IHK_OS_PGSIZE_2MB 1
#define IHK_OS_PGSIZE_1GB 2
extern struct ihk_os_monitor *monitor;
extern int sprintf(char * buf, const char *fmt, ...);
#define DEBUG_ARCH_RUSAGE
#ifdef DEBUG_ARCH_RUSAGE
#define dprintf(...) \
do { \
char msg[1024]; \
sprintf(msg, __VA_ARGS__); \
kprintf("%s,%s", __FUNCTION__, msg); \
} while (0)
#define eprintf(...) \
do { \
char msg[1024]; \
sprintf(msg, __VA_ARGS__); \
kprintf("%s,%s", __FUNCTION__, msg); \
} while (0)
#else
#define dprintf(...) do { } while (0)
#define eprintf(...) \
do { \
char msg[1024]; \
sprintf(msg, __VA_ARGS__); \
kprintf("%s,%s", __FUNCTION__, msg); \
} while (0)
#endif
static inline int rusage_pgsize_to_pgtype(size_t pgsize)
{
int ret = IHK_OS_PGSIZE_4KB;
#if 0 /* postk-TODO */
switch (pgsize) {
case PTL1_SIZE:
ret = IHK_OS_PGSIZE_4KB;
break;
case PTL2_SIZE:
ret = IHK_OS_PGSIZE_2MB;
break;
case PTL3_SIZE:
ret = IHK_OS_PGSIZE_1GB;
break;
default:
eprintf("unknown pgsize=%ld\n", pgsize);
break;
}
#endif
return ret;
}
#endif /* !defined(ARCH_RUSAGE_H_INCLUDED) */

View File

@ -0,0 +1,41 @@
/* shm.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_ARCH_SHM_H
#define __HEADER_ARM64_ARCH_SHM_H
#include <arch-memory.h>
/* shmflg */
#define SHM_HUGE_SHIFT 26
#if FIRST_LEVEL_BLOCK_SUPPORT
# define SHM_HUGE_FIRST_BLOCK (__PTL3_SHIFT << SHM_HUGE_SHIFT)
#else
# define SHM_HUGE_FIRST_BLOCK -1 /* not supported */
#endif
#define SHM_HUGE_SECOND_BLOCK (__PTL2_SHIFT << SHM_HUGE_SHIFT)
struct ipc_perm {
key_t key;
uid_t uid;
gid_t gid;
uid_t cuid;
gid_t cgid;
uint16_t mode;
uint8_t padding[2];
uint16_t seq;
uint8_t padding2[22];
};
struct shmid_ds {
struct ipc_perm shm_perm;
size_t shm_segsz;
time_t shm_atime;
time_t shm_dtime;
time_t shm_ctime;
pid_t shm_cpid;
pid_t shm_lpid;
uint64_t shm_nattch;
uint8_t padding[12];
int init_pgshift;
};
#endif /* __HEADER_ARM64_ARCH_SHM_H */

View File

@ -0,0 +1,34 @@
#ifndef ARCH_RUSAGE_H_INCLUDED
#define ARCH_RUSAGE_H_INCLUDED
#include <arch-memory.h>
//#define DEBUG_RUSAGE
extern struct rusage_global *rusage;
#define IHK_OS_PGSIZE_4KB 0
#define IHK_OS_PGSIZE_16KB 1
#define IHK_OS_PGSIZE_64KB 2
static inline int rusage_pgsize_to_pgtype(size_t pgsize)
{
int ret = IHK_OS_PGSIZE_4KB;
switch (pgsize) {
case __PTL1_SIZE:
ret = IHK_OS_PGSIZE_4KB;
break;
case __PTL2_SIZE:
ret = IHK_OS_PGSIZE_16KB;
break;
case __PTL3_SIZE:
ret = IHK_OS_PGSIZE_64KB;
break;
default:
kprintf("%s: Error: Unknown pgsize=%ld\n", __FUNCTION__, pgsize);
break;
}
return ret;
}
#endif /* !defined(ARCH_RUSAGE_H_INCLUDED) */
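
A small sketch of the intended use: callers bucket page counts by the translated type index (the counters array here is an assumption for illustration, not taken from this header):

static inline void rusage_account(long *counters, size_t pgsize, long npages)
{
	counters[rusage_pgsize_to_pgtype(pgsize)] += npages;
}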

View File

@ -0,0 +1,106 @@
/* arm-gic-v2.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
/*
* include/linux/irqchip/arm-gic.h
*
* Copyright (C) 2002 ARM Limited, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __LINUX_IRQCHIP_ARM_GIC_H
#define __LINUX_IRQCHIP_ARM_GIC_H
/* check config */
#if defined(CONFIG_HAS_NMI) && !defined(CONFIG_ARM_GIC_V3)
# error GICv2 does not support NMI
#endif
/* @ref.impl include/linux/irqchip/arm-gic.h */
#define GIC_CPU_CTRL 0x00
#define GIC_CPU_PRIMASK 0x04
#define GIC_CPU_BINPOINT 0x08
#define GIC_CPU_INTACK 0x0c
#define GIC_CPU_EOI 0x10
#define GIC_CPU_RUNNINGPRI 0x14
#define GIC_CPU_HIGHPRI 0x18
#define GIC_CPU_ALIAS_BINPOINT 0x1c
#define GIC_CPU_ACTIVEPRIO 0xd0
#define GIC_CPU_IDENT 0xfc
#define GICC_ENABLE 0x1
#define GICC_INT_PRI_THRESHOLD 0xf0
#define GICC_IAR_INT_ID_MASK 0x3ff
#define GICC_INT_SPURIOUS 1023
#define GICC_DIS_BYPASS_MASK 0x1e0
#define GIC_DIST_CTRL 0x000
#define GIC_DIST_CTR 0x004
#define GIC_DIST_IGROUP 0x080
#define GIC_DIST_ENABLE_SET 0x100
#define GIC_DIST_ENABLE_CLEAR 0x180
#define GIC_DIST_PENDING_SET 0x200
#define GIC_DIST_PENDING_CLEAR 0x280
#define GIC_DIST_ACTIVE_SET 0x300
#define GIC_DIST_ACTIVE_CLEAR 0x380
#define GIC_DIST_PRI 0x400
#define GIC_DIST_TARGET 0x800
#define GIC_DIST_CONFIG 0xc00
#define GIC_DIST_SOFTINT 0xf00
#define GIC_DIST_SGI_PENDING_CLEAR 0xf10
#define GIC_DIST_SGI_PENDING_SET 0xf20
#define GICD_ENABLE 0x1
#define GICD_DISABLE 0x0
#define GICD_INT_ACTLOW_LVLTRIG 0x0
#define GICD_INT_EN_CLR_X32 0xffffffff
#define GICD_INT_EN_SET_SGI 0x0000ffff
#define GICD_INT_EN_CLR_PPI 0xffff0000
#ifdef CONFIG_HAS_NMI
#define GICD_INT_NMI_PRI 0x40
#define GICD_INT_DEF_PRI 0xc0
#else
#define GICD_INT_DEF_PRI 0xa0
#endif
#define GICD_INT_DEF_PRI_X4 ((GICD_INT_DEF_PRI << 24) |\
(GICD_INT_DEF_PRI << 16) |\
(GICD_INT_DEF_PRI << 8) |\
GICD_INT_DEF_PRI)
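/* Illustrative expansion: without CONFIG_HAS_NMI, GICD_INT_DEF_PRI_X4 is
 * 0xa0a0a0a0, i.e. one default priority byte for each of the four
 * interrupts covered by a GICD_IPRIORITYR word. */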
#define GICH_HCR 0x0
#define GICH_VTR 0x4
#define GICH_VMCR 0x8
#define GICH_MISR 0x10
#define GICH_EISR0 0x20
#define GICH_EISR1 0x24
#define GICH_ELRSR0 0x30
#define GICH_ELRSR1 0x34
#define GICH_APR 0xf0
#define GICH_LR0 0x100
#define GICH_HCR_EN (1 << 0)
#define GICH_HCR_UIE (1 << 1)
#define GICH_LR_VIRTUALID (0x3ff << 0)
#define GICH_LR_PHYSID_CPUID_SHIFT (10)
#define GICH_LR_PHYSID_CPUID (7 << GICH_LR_PHYSID_CPUID_SHIFT)
#define GICH_LR_STATE (3 << 28)
#define GICH_LR_PENDING_BIT (1 << 28)
#define GICH_LR_ACTIVE_BIT (1 << 29)
#define GICH_LR_EOI (1 << 19)
#define GICH_VMCR_CTRL_SHIFT 0
#define GICH_VMCR_CTRL_MASK (0x21f << GICH_VMCR_CTRL_SHIFT)
#define GICH_VMCR_PRIMASK_SHIFT 27
#define GICH_VMCR_PRIMASK_MASK (0x1f << GICH_VMCR_PRIMASK_SHIFT)
#define GICH_VMCR_BINPOINT_SHIFT 21
#define GICH_VMCR_BINPOINT_MASK (0x7 << GICH_VMCR_BINPOINT_SHIFT)
#define GICH_VMCR_ALIAS_BINPOINT_SHIFT 18
#define GICH_VMCR_ALIAS_BINPOINT_MASK (0x7 << GICH_VMCR_ALIAS_BINPOINT_SHIFT)
#define GICH_MISR_EOI (1 << 0)
#define GICH_MISR_U (1 << 1)
#endif /* __LINUX_IRQCHIP_ARM_GIC_H */

View File

@ -0,0 +1,391 @@
/* arm-gic-v3.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
/*
* Copyright (C) 2013, 2014 ARM Limited, All Rights Reserved.
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __LINUX_IRQCHIP_ARM_GIC_V3_H
#define __LINUX_IRQCHIP_ARM_GIC_V3_H
/* @ref.impl include/linux/irqchip/arm-gic-v3.h */
#include <sysreg.h>
/*
* Distributor registers. We assume we're running non-secure, with ARE
* being set. Secure-only and non-ARE registers are not described.
*/
#define GICD_CTLR 0x0000
#define GICD_TYPER 0x0004
#define GICD_IIDR 0x0008
#define GICD_STATUSR 0x0010
#define GICD_SETSPI_NSR 0x0040
#define GICD_CLRSPI_NSR 0x0048
#define GICD_SETSPI_SR 0x0050
#define GICD_CLRSPI_SR 0x0058
#define GICD_SEIR 0x0068
#define GICD_IGROUPR 0x0080
#define GICD_ISENABLER 0x0100
#define GICD_ICENABLER 0x0180
#define GICD_ISPENDR 0x0200
#define GICD_ICPENDR 0x0280
#define GICD_ISACTIVER 0x0300
#define GICD_ICACTIVER 0x0380
#define GICD_IPRIORITYR 0x0400
#define GICD_ICFGR 0x0C00
#define GICD_IGRPMODR 0x0D00
#define GICD_NSACR 0x0E00
#define GICD_IROUTER 0x6000
#define GICD_IDREGS 0xFFD0
#define GICD_PIDR2 0xFFE8
/*
* Those registers are actually from GICv2, but the spec demands that they
* are implemented as RES0 if ARE is 1 (which we do in KVM's emulated GICv3).
*/
#define GICD_ITARGETSR 0x0800
#define GICD_SGIR 0x0F00
#define GICD_CPENDSGIR 0x0F10
#define GICD_SPENDSGIR 0x0F20
#define GICD_CTLR_RWP (1U << 31)
#define GICD_CTLR_DS (1U << 6)
#define GICD_CTLR_ARE_NS (1U << 4)
#define GICD_CTLR_ENABLE_G1A (1U << 1)
#define GICD_CTLR_ENABLE_G1 (1U << 0)
/*
* In systems with a single security state (what we emulate in KVM)
* the meaning of the interrupt group enable bits is slightly different
*/
#define GICD_CTLR_ENABLE_SS_G1 (1U << 1)
#define GICD_CTLR_ENABLE_SS_G0 (1U << 0)
#define GICD_TYPER_LPIS (1U << 17)
#define GICD_TYPER_MBIS (1U << 16)
#define GICD_TYPER_ID_BITS(typer) ((((typer) >> 19) & 0x1f) + 1)
#define GICD_TYPER_IRQS(typer) ((((typer) & 0x1f) + 1) * 32)
#define GICD_IROUTER_SPI_MODE_ONE (0U << 31)
#define GICD_IROUTER_SPI_MODE_ANY (1U << 31)
#define GIC_PIDR2_ARCH_MASK 0xf0
#define GIC_PIDR2_ARCH_GICv3 0x30
#define GIC_PIDR2_ARCH_GICv4 0x40
#define GIC_V3_DIST_SIZE 0x10000
/*
* Re-Distributor registers, offsets from RD_base
*/
#define GICR_CTLR GICD_CTLR
#define GICR_IIDR 0x0004
#define GICR_TYPER 0x0008
#define GICR_STATUSR GICD_STATUSR
#define GICR_WAKER 0x0014
#define GICR_SETLPIR 0x0040
#define GICR_CLRLPIR 0x0048
#define GICR_SEIR GICD_SEIR
#define GICR_PROPBASER 0x0070
#define GICR_PENDBASER 0x0078
#define GICR_INVLPIR 0x00A0
#define GICR_INVALLR 0x00B0
#define GICR_SYNCR 0x00C0
#define GICR_MOVLPIR 0x0100
#define GICR_MOVALLR 0x0110
#define GICR_IDREGS GICD_IDREGS
#define GICR_PIDR2 GICD_PIDR2
#define GICR_CTLR_ENABLE_LPIS (1UL << 0)
#define GICR_TYPER_CPU_NUMBER(r) (((r) >> 8) & 0xffff)
#define GICR_WAKER_ProcessorSleep (1U << 1)
#define GICR_WAKER_ChildrenAsleep (1U << 2)
#define GICR_PROPBASER_NonShareable (0U << 10)
#define GICR_PROPBASER_InnerShareable (1U << 10)
#define GICR_PROPBASER_OuterShareable (2U << 10)
#define GICR_PROPBASER_SHAREABILITY_MASK (3UL << 10)
#define GICR_PROPBASER_nCnB (0U << 7)
#define GICR_PROPBASER_nC (1U << 7)
#define GICR_PROPBASER_RaWt (2U << 7)
#define GICR_PROPBASER_RaWb (3U << 7)
#define GICR_PROPBASER_WaWt (4U << 7)
#define GICR_PROPBASER_WaWb (5U << 7)
#define GICR_PROPBASER_RaWaWt (6U << 7)
#define GICR_PROPBASER_RaWaWb (7U << 7)
#define GICR_PROPBASER_CACHEABILITY_MASK (7U << 7)
#define GICR_PROPBASER_IDBITS_MASK (0x1f)
#define GICR_PENDBASER_NonShareable (0U << 10)
#define GICR_PENDBASER_InnerShareable (1U << 10)
#define GICR_PENDBASER_OuterShareable (2U << 10)
#define GICR_PENDBASER_SHAREABILITY_MASK (3UL << 10)
#define GICR_PENDBASER_nCnB (0U << 7)
#define GICR_PENDBASER_nC (1U << 7)
#define GICR_PENDBASER_RaWt (2U << 7)
#define GICR_PENDBASER_RaWb (3U << 7)
#define GICR_PENDBASER_WaWt (4U << 7)
#define GICR_PENDBASER_WaWb (5U << 7)
#define GICR_PENDBASER_RaWaWt (6U << 7)
#define GICR_PENDBASER_RaWaWb (7U << 7)
#define GICR_PENDBASER_CACHEABILITY_MASK (7U << 7)
/*
* Re-Distributor registers, offsets from SGI_base
*/
#define GICR_IGROUPR0 GICD_IGROUPR
#define GICR_ISENABLER0 GICD_ISENABLER
#define GICR_ICENABLER0 GICD_ICENABLER
#define GICR_ISPENDR0 GICD_ISPENDR
#define GICR_ICPENDR0 GICD_ICPENDR
#define GICR_ISACTIVER0 GICD_ISACTIVER
#define GICR_ICACTIVER0 GICD_ICACTIVER
#define GICR_IPRIORITYR0 GICD_IPRIORITYR
#define GICR_ICFGR0 GICD_ICFGR
#define GICR_IGRPMODR0 GICD_IGRPMODR
#define GICR_NSACR GICD_NSACR
#define GICR_TYPER_PLPIS (1U << 0)
#define GICR_TYPER_VLPIS (1U << 1)
#define GICR_TYPER_LAST (1U << 4)
#define GIC_V3_REDIST_SIZE 0x20000
#define LPI_PROP_GROUP1 (1 << 1)
#define LPI_PROP_ENABLED (1 << 0)
/*
* ITS registers, offsets from ITS_base
*/
#define GITS_CTLR 0x0000
#define GITS_IIDR 0x0004
#define GITS_TYPER 0x0008
#define GITS_CBASER 0x0080
#define GITS_CWRITER 0x0088
#define GITS_CREADR 0x0090
#define GITS_BASER 0x0100
#define GITS_PIDR2 GICR_PIDR2
#define GITS_TRANSLATER 0x10040
#define GITS_CTLR_ENABLE (1U << 0)
#define GITS_CTLR_QUIESCENT (1U << 31)
#define GITS_TYPER_DEVBITS_SHIFT 13
#define GITS_TYPER_DEVBITS(r) ((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1)
#define GITS_TYPER_PTA (1UL << 19)
#define GITS_CBASER_VALID (1UL << 63)
#define GITS_CBASER_nCnB (0UL << 59)
#define GITS_CBASER_nC (1UL << 59)
#define GITS_CBASER_RaWt (2UL << 59)
#define GITS_CBASER_RaWb (3UL << 59)
#define GITS_CBASER_WaWt (4UL << 59)
#define GITS_CBASER_WaWb (5UL << 59)
#define GITS_CBASER_RaWaWt (6UL << 59)
#define GITS_CBASER_RaWaWb (7UL << 59)
#define GITS_CBASER_CACHEABILITY_MASK (7UL << 59)
#define GITS_CBASER_NonShareable (0UL << 10)
#define GITS_CBASER_InnerShareable (1UL << 10)
#define GITS_CBASER_OuterShareable (2UL << 10)
#define GITS_CBASER_SHAREABILITY_MASK (3UL << 10)
#define GITS_BASER_NR_REGS 8
#define GITS_BASER_VALID (1UL << 63)
#define GITS_BASER_nCnB (0UL << 59)
#define GITS_BASER_nC (1UL << 59)
#define GITS_BASER_RaWt (2UL << 59)
#define GITS_BASER_RaWb (3UL << 59)
#define GITS_BASER_WaWt (4UL << 59)
#define GITS_BASER_WaWb (5UL << 59)
#define GITS_BASER_RaWaWt (6UL << 59)
#define GITS_BASER_RaWaWb (7UL << 59)
#define GITS_BASER_CACHEABILITY_MASK (7UL << 59)
#define GITS_BASER_TYPE_SHIFT (56)
#define GITS_BASER_TYPE(r) (((r) >> GITS_BASER_TYPE_SHIFT) & 7)
#define GITS_BASER_ENTRY_SIZE_SHIFT (48)
#define GITS_BASER_ENTRY_SIZE(r) ((((r) >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0xff) + 1)
#define GITS_BASER_NonShareable (0UL << 10)
#define GITS_BASER_InnerShareable (1UL << 10)
#define GITS_BASER_OuterShareable (2UL << 10)
#define GITS_BASER_SHAREABILITY_SHIFT (10)
#define GITS_BASER_SHAREABILITY_MASK (3UL << GITS_BASER_SHAREABILITY_SHIFT)
#define GITS_BASER_PAGE_SIZE_SHIFT (8)
#define GITS_BASER_PAGE_SIZE_4K (0UL << GITS_BASER_PAGE_SIZE_SHIFT)
#define GITS_BASER_PAGE_SIZE_16K (1UL << GITS_BASER_PAGE_SIZE_SHIFT)
#define GITS_BASER_PAGE_SIZE_64K (2UL << GITS_BASER_PAGE_SIZE_SHIFT)
#define GITS_BASER_PAGE_SIZE_MASK (3UL << GITS_BASER_PAGE_SIZE_SHIFT)
#define GITS_BASER_PAGES_MAX 256
#define GITS_BASER_TYPE_NONE 0
#define GITS_BASER_TYPE_DEVICE 1
#define GITS_BASER_TYPE_VCPU 2
#define GITS_BASER_TYPE_CPU 3
#define GITS_BASER_TYPE_COLLECTION 4
#define GITS_BASER_TYPE_RESERVED5 5
#define GITS_BASER_TYPE_RESERVED6 6
#define GITS_BASER_TYPE_RESERVED7 7
/*
* ITS commands
*/
#define GITS_CMD_MAPD 0x08
#define GITS_CMD_MAPC 0x09
#define GITS_CMD_MAPVI 0x0a
#define GITS_CMD_MOVI 0x01
#define GITS_CMD_DISCARD 0x0f
#define GITS_CMD_INV 0x0c
#define GITS_CMD_MOVALL 0x0e
#define GITS_CMD_INVALL 0x0d
#define GITS_CMD_INT 0x03
#define GITS_CMD_CLEAR 0x04
#define GITS_CMD_SYNC 0x05
/*
* CPU interface registers
*/
#define ICC_CTLR_EL1_EOImode_drop_dir (0U << 1)
#define ICC_CTLR_EL1_EOImode_drop (1U << 1)
#define ICC_SRE_EL1_SRE (1U << 0)
/*
* Hypervisor interface registers (SRE only)
*/
#define ICH_LR_VIRTUAL_ID_MASK ((1UL << 32) - 1)
#define ICH_LR_EOI (1UL << 41)
#define ICH_LR_GROUP (1UL << 60)
#define ICH_LR_STATE (3UL << 62)
#define ICH_LR_PENDING_BIT (1UL << 62)
#define ICH_LR_ACTIVE_BIT (1UL << 63)
#define ICH_MISR_EOI (1 << 0)
#define ICH_MISR_U (1 << 1)
#define ICH_HCR_EN (1 << 0)
#define ICH_HCR_UIE (1 << 1)
#define ICH_VMCR_CTLR_SHIFT 0
#define ICH_VMCR_CTLR_MASK (0x21f << ICH_VMCR_CTLR_SHIFT)
#define ICH_VMCR_BPR1_SHIFT 18
#define ICH_VMCR_BPR1_MASK (7 << ICH_VMCR_BPR1_SHIFT)
#define ICH_VMCR_BPR0_SHIFT 21
#define ICH_VMCR_BPR0_MASK (7 << ICH_VMCR_BPR0_SHIFT)
#define ICH_VMCR_PMR_SHIFT 24
#define ICH_VMCR_PMR_MASK (0xffUL << ICH_VMCR_PMR_SHIFT)
#define ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1)
#define ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0)
#define ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5)
#define ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0)
#define ICC_CTLR_EL1 sys_reg(3, 0, 12, 12, 4)
#define ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5)
#define ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7)
#define ICC_BPR1_EL1 sys_reg(3, 0, 12, 12, 3)
#define ICC_IAR1_EL1_SPURIOUS 0x3ff
#define ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5)
#define ICC_SRE_EL2_SRE (1 << 0)
#define ICC_SRE_EL2_ENABLE (1 << 3)
#define ICC_SGI1R_TARGET_LIST_SHIFT 0
#define ICC_SGI1R_TARGET_LIST_MASK (0xffff << ICC_SGI1R_TARGET_LIST_SHIFT)
#define ICC_SGI1R_AFFINITY_1_SHIFT 16
#define ICC_SGI1R_AFFINITY_1_MASK (0xff << ICC_SGI1R_AFFINITY_1_SHIFT)
#define ICC_SGI1R_SGI_ID_SHIFT 24
#define ICC_SGI1R_SGI_ID_MASK (0xff << ICC_SGI1R_SGI_ID_SHIFT)
#define ICC_SGI1R_AFFINITY_2_SHIFT 32
#define ICC_SGI1R_AFFINITY_2_MASK (0xffULL << ICC_SGI1R_AFFINITY_2_SHIFT)
#define ICC_SGI1R_IRQ_ROUTING_MODE_BIT 40
#define ICC_SGI1R_AFFINITY_3_SHIFT 48
#define ICC_SGI1R_AFFINITY_3_MASK (0xffULL << ICC_SGI1R_AFFINITY_3_SHIFT)
#ifdef CONFIG_HAS_NMI
/* PMR values used to mask/unmask interrupts */
#define ICC_PMR_EL1_G_SHIFT 6
#define ICC_PMR_EL1_G_BIT (1 << ICC_PMR_EL1_G_SHIFT)
#define ICC_PMR_EL1_UNMASKED 0xf0
#define ICC_PMR_EL1_MASKED (ICC_PMR_EL1_UNMASKED ^ ICC_PMR_EL1_G_BIT)
/*
* This is the GIC interrupt mask bit. It is not actually part of the
* PSR and so does not appear in the user API, we are simply using some
* reserved bits in the PSR to store some state from the interrupt
* controller. The context save/restore functions will extract the
* ICC_PMR_EL1_G_BIT and save it as the PSR_G_BIT.
*/
#define PSR_G_BIT 0x00400000
#define PSR_G_SHIFT 22
#define PSR_G_PMR_G_SHIFT (PSR_G_SHIFT - ICC_PMR_EL1_G_SHIFT)
#define PSR_I_PMR_G_SHIFT (7 - ICC_PMR_EL1_G_SHIFT)
#endif /* CONFIG_HAS_NMI */
/*
* System register definitions
*/
#define ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4)
#define ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0)
#define ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1)
#define ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2)
#define ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3)
#define ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5)
#define ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7)
#define __LR0_EL2(x) sys_reg(3, 4, 12, 12, x)
#define __LR8_EL2(x) sys_reg(3, 4, 12, 13, x)
#define ICH_LR0_EL2 __LR0_EL2(0)
#define ICH_LR1_EL2 __LR0_EL2(1)
#define ICH_LR2_EL2 __LR0_EL2(2)
#define ICH_LR3_EL2 __LR0_EL2(3)
#define ICH_LR4_EL2 __LR0_EL2(4)
#define ICH_LR5_EL2 __LR0_EL2(5)
#define ICH_LR6_EL2 __LR0_EL2(6)
#define ICH_LR7_EL2 __LR0_EL2(7)
#define ICH_LR8_EL2 __LR8_EL2(0)
#define ICH_LR9_EL2 __LR8_EL2(1)
#define ICH_LR10_EL2 __LR8_EL2(2)
#define ICH_LR11_EL2 __LR8_EL2(3)
#define ICH_LR12_EL2 __LR8_EL2(4)
#define ICH_LR13_EL2 __LR8_EL2(5)
#define ICH_LR14_EL2 __LR8_EL2(6)
#define ICH_LR15_EL2 __LR8_EL2(7)
#define __AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x)
#define ICH_AP0R0_EL2 __AP0Rx_EL2(0)
#define ICH_AP0R1_EL2 __AP0Rx_EL2(1)
#define ICH_AP0R2_EL2 __AP0Rx_EL2(2)
#define ICH_AP0R3_EL2 __AP0Rx_EL2(3)
#define __AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x)
#define ICH_AP1R0_EL2 __AP1Rx_EL2(0)
#define ICH_AP1R1_EL2 __AP1Rx_EL2(1)
#define ICH_AP1R2_EL2 __AP1Rx_EL2(2)
#define ICH_AP1R3_EL2 __AP1Rx_EL2(3)
/**
* @ref.impl host-kernel/include/linux/stringify.h
*/
#define __stringify_1(x...) #x
#define __stringify(x...) __stringify_1(x)
#endif /* __LINUX_IRQCHIP_ARM_GIC_V3_H */
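
With CONFIG_HAS_NMI, interrupt masking goes through the priority mask register instead of DAIF. A minimal sketch, assuming a write_sysreg_s() helper analogous to the read_sysreg_s() used elsewhere in these headers:

#ifdef CONFIG_HAS_NMI
static inline void gic_pmr_mask_irqs(void)
{
	/* raise the mask so only the NMI priority band is delivered;
	 * write_sysreg_s() is assumed from sysreg.h */
	write_sysreg_s(ICC_PMR_EL1_MASKED, ICC_PMR_EL1);
}

static inline void gic_pmr_unmask_irqs(void)
{
	write_sysreg_s(ICC_PMR_EL1_UNMASKED, ICC_PMR_EL1);
}
#endif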

View File

@ -0,0 +1,27 @@
/* asm-offsets.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_COMMON_ASM_OFFSETS_H
#define __HEADER_ARM64_COMMON_ASM_OFFSETS_H
#define S_X0 0x00 /* offsetof(struct pt_regs, regs[0]) */
#define S_X1 0x08 /* offsetof(struct pt_regs, regs[1]) */
#define S_X2 0x10 /* offsetof(struct pt_regs, regs[2]) */
#define S_X3 0x18 /* offsetof(struct pt_regs, regs[3]) */
#define S_X4 0x20 /* offsetof(struct pt_regs, regs[4]) */
#define S_X5 0x28 /* offsetof(struct pt_regs, regs[5]) */
#define S_X6 0x30 /* offsetof(struct pt_regs, regs[6]) */
#define S_X7 0x38 /* offsetof(struct pt_regs, regs[7]) */
#define S_LR 0xf0 /* offsetof(struct pt_regs, regs[30]) */
#define S_SP 0xf8 /* offsetof(struct pt_regs, sp) */
#define S_PC 0x100 /* offsetof(struct pt_regs, pc) */
#define S_PSTATE 0x108 /* offsetof(struct pt_regs, pstate) */
#define S_ORIG_X0 0x110 /* offsetof(struct pt_regs, orig_x0) */
#define S_SYSCALLNO 0x118 /* offsetof(struct pt_regs, syscallno) */
#define S_FRAME_SIZE 0x120 /* sizeof(struct pt_regs) */
#define CPU_INFO_SETUP 0x10 /* offsetof(struct cpu_info, cpu_setup) */
#define CPU_INFO_SZ 0x18 /* sizeof(struct cpu_info) */
#define TI_FLAGS 0x00 /* offsetof(struct thread_info, flags) */
#define TI_CPU_CONTEXT 0x10 /* offsetof(struct thread_info, cpu_context) */
#endif /* !__HEADER_ARM64_COMMON_ASM_OFFSETS_H */

View File

@ -0,0 +1,149 @@
/* assembler.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_ASSEMBLER_H
#define __HEADER_ARM64_COMMON_ASSEMBLER_H
#include <thread_info.h>
#if defined(CONFIG_HAS_NMI)
#include <arm-gic-v3.h>
#else /* defined(CONFIG_HAS_NMI) */
#include <sysreg.h>
#endif /* defined(CONFIG_HAS_NMI) */
#if defined(CONFIG_HAS_NMI)
/*
* Enable and disable pseudo NMI.
*/
.macro disable_nmi
msr daifset, #2
.endm
.macro enable_nmi
msr daifclr, #2
.endm
/*
* Enable and disable interrupts.
*/
.macro disable_irq, tmp
mov \tmp, #ICC_PMR_EL1_MASKED
msr_s ICC_PMR_EL1, \tmp
.endm
.macro enable_irq, tmp
mov \tmp, #ICC_PMR_EL1_UNMASKED
msr_s ICC_PMR_EL1, \tmp
.endm
#else /* defined(CONFIG_HAS_NMI) */
/*
* Enable and disable pseudo NMI.
*/
.macro disable_nmi
.endm
.macro enable_nmi
.endm
/*
* Enable and disable interrupts.
*/
.macro disable_irq, tmp
msr daifset, #2
.endm
.macro enable_irq, tmp
msr daifclr, #2
.endm
#endif /* defined(CONFIG_HAS_NMI) */
/*
* Enable and disable debug exceptions.
*/
.macro disable_dbg
msr daifset, #8
.endm
.macro enable_dbg
msr daifclr, #8
.endm
.macro disable_step_tsk, flgs, tmp
tbz \flgs, #TIF_SINGLESTEP, 9990f
mrs \tmp, mdscr_el1
bic \tmp, \tmp, #1
msr mdscr_el1, \tmp
isb // Synchronise with enable_dbg
9990:
.endm
.macro enable_step_tsk, flgs, tmp
tbz \flgs, #TIF_SINGLESTEP, 9990f
disable_dbg
mrs \tmp, mdscr_el1
orr \tmp, \tmp, #1
msr mdscr_el1, \tmp
b 9991f
9990:
mrs \tmp, mdscr_el1
bic \tmp, \tmp, #1
msr mdscr_el1, \tmp
isb // Synchronise with enable_dbg
9991:
.endm
/*
* Enable both debug exceptions and interrupts. This is likely to be
* faster than two daifclr operations, since writes to this register
* are self-synchronising.
*/
#if defined(CONFIG_HAS_NMI)
.macro enable_dbg_and_irq, tmp
enable_dbg
enable_irq \tmp
.endm
#else /* defined(CONFIG_HAS_NMI) */
.macro enable_dbg_and_irq, tmp
msr daifclr, #(8 | 2)
.endm
#endif /* defined(CONFIG_HAS_NMI) */
/*
* Register aliases.
*/
lr .req x30 // link register
/*
* Vector entry
*/
.macro ventry label
.align 7
b \label
.endm
/*
* Select code when configured for BE.
*/
//#ifdef CONFIG_CPU_BIG_ENDIAN
//#define CPU_BE(code...) code
//#else
#define CPU_BE(code...)
//#endif
/*
* Select code when configured for LE.
*/
//#ifdef CONFIG_CPU_BIG_ENDIAN
//#define CPU_LE(code...)
//#else
#define CPU_LE(code...) code
//#endif
#define ENDPIPROC(x) \
.globl __pi_##x; \
.type __pi_##x, %function; \
.set __pi_##x, x; \
.size __pi_##x, . - x; \
ENDPROC(x)
#endif /* !__HEADER_ARM64_COMMON_ASSEMBLER_H */

View File

@ -0,0 +1,7 @@
/* cache.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_CACHE_H
#define __HEADER_ARM64_COMMON_CACHE_H
#define L1_CACHE_SHIFT 6
#endif /* !__HEADER_ARM64_COMMON_CACHE_H */

View File

@ -0,0 +1,32 @@
/* cas.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_COMMON_CAS_H
#define __HEADER_ARM64_COMMON_CAS_H
#include <arch/cpu.h>
/* @ref.impl arch/arm64/include/asm/cmpxchg.h::__cmpxchg (size == 8 case) */
/* 8 byte compare and swap, return 0:fail, 1:success */
static inline int
compare_and_swap(void *addr, unsigned long olddata, unsigned long newdata)
{
unsigned long oldval = 0, res = 0;
smp_mb();
do {
asm volatile("// __cmpxchg8\n"
" ldxr %1, %2\n"
" mov %w0, #0\n"
" cmp %1, %3\n"
" b.ne 1f\n"
" stxr %w0, %4, %2\n"
"1:\n"
: "=&r" (res), "=&r" (oldval), "+Q" (*(unsigned long *)addr)
: "Ir" (olddata), "r" (newdata)
: "cc");
} while (res);
smp_mb();
return (oldval == olddata);
}
#endif /* !__HEADER_ARM64_COMMON_CAS_H */
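
A typical retry loop around compare_and_swap(), e.g. a lock-free add (sketch; the helper name is illustrative):

static inline void atomic_add_ulong(unsigned long *ctr, unsigned long delta)
{
	unsigned long old;

	do {
		old = *(volatile unsigned long *)ctr;
	} while (!compare_and_swap(ctr, old, old + delta));
}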

View File

@ -0,0 +1,32 @@
/* compiler.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __ASM_COMPILER_H
#define __ASM_COMPILER_H
/* @ref.impl arch/arm64/include/asm/compiler.h::__asmeq(x,y) */
/*
* This is used to ensure the compiler did actually allocate the register we
* asked it for some inline assembly sequences. Apparently we can't trust the
* compiler from one version to another so a bit of paranoia won't hurt. This
* string is meant to be concatenated with the inline asm string and will
* cause compilation to stop on mismatch. (for details, see gcc PR 15089)
*/
#define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t"
/* @ref.impl include/linux/compiler.h::__section(S) */
/* Simple shorthand for a section definition */
# define __section(S) __attribute__ ((__section__(#S)))
/* @ref.impl include/linux/compiler.h::__aligned(x) */
/*
* From the GCC manual:
*
* Many functions have no effects except the return value and their
* return value depends only on the parameters and/or global
* variables. Such a function can be subject to common subexpression
* elimination and loop optimization just as an arithmetic operator
* would be.
* [...]
*/
#define __aligned(x) __attribute__((aligned(x)))
#endif /* __ASM_COMPILER_H */
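
An illustrative use of __asmeq(): it makes the build fail if the compiler did not bind an operand to the register the asm text expects. The syscall stub below is a hypothetical example, not code from this tree:

static inline long raw_svc0(long nr)
{
	register long x8 asm("x8") = nr;
	register long x0 asm("x0");

	asm volatile(__asmeq("%0", "x0")
		__asmeq("%1", "x8")
		"svc #0"
		: "=r" (x0) : "r" (x8) : "memory");
	return x0;
}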

View File

@ -0,0 +1,23 @@
/* const.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_CONST_H
#define __HEADER_ARM64_COMMON_CONST_H
#ifndef __ASSEMBLY__
#define __AC(X,Y) (X##Y)
#define _AC(X,Y) __AC(X,Y)
#define _AT(T,X) ((T)(X))
#else /* !__ASSEMBLY__ */
#define _AC(X,Y) X
#define _AT(T,X) X
#endif /* !__ASSEMBLY__ */
#define _BITUL(x) (_AC(1,UL) << (x))
#define _BITULL(x) (_AC(1,ULL) << (x))
/*
* Allow for constants defined here to be used from assembly code
* by prepending the UL suffix only with actual C code compilation.
*/
#define UL(x) _AC(x, UL)
#endif /* !__HEADER_ARM64_COMMON_CONST_H */
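
The point of _AC()/UL() is that one definition serves both C and assembly. A small sketch: in C the first line expands to (1UL << 16), while under __ASSEMBLY__ the UL suffix is dropped so the assembler can parse it (the constants are hypothetical):

#define EXAMPLE_REGION_SIZE _BITUL(16) /* hypothetical constant */
#define EXAMPLE_REGION_MASK (~(EXAMPLE_REGION_SIZE - 1))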

View File

@ -0,0 +1,8 @@
/* context.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_CONTEXT_H
#define __HEADER_ARM64_COMMON_CONTEXT_H
void switch_mm(struct page_table *pgtbl);
void free_mmu_context(struct page_table *pgtbl);
#endif /*__HEADER_ARM64_COMMON_CONTEXT_H*/

View File

@ -0,0 +1,191 @@
/* cpufeature.h COPYRIGHT FUJITSU LIMITED 2017 */
#ifndef __ASM_CPUFEATURE_H
#define __ASM_CPUFEATURE_H
#include <types.h>
#include <cpuinfo.h>
#include <sysreg.h>
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
/* CPU feature register tracking */
enum ftr_type {
FTR_EXACT, /* Use a predefined safe value */
FTR_LOWER_SAFE, /* Smaller value is safe */
FTR_HIGHER_SAFE,/* Bigger value is safe */
};
#define FTR_STRICT (1) /* SANITY check strict matching required */
#define FTR_NONSTRICT (0) /* SANITY check ignored */
#define FTR_SIGNED (1) /* Value should be treated as signed */
#define FTR_UNSIGNED (0) /* Value should be treated as unsigned */
#define FTR_VISIBLE (1) /* Feature visible to the user space */
#define FTR_HIDDEN (0) /* Feature is hidden from the user */
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
struct arm64_ftr_bits {
int sign; /* Value is signed ? */
int visible;
int strict; /* CPU Sanity check: strict matching required ? */
enum ftr_type type;
uint8_t shift;
uint8_t width;
int64_t safe_val; /* safe value for FTR_EXACT features */
};
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
/*
* @arm64_ftr_reg - Feature register
* @strict_mask Bits which should match across all CPUs for sanity.
* @sys_val Safe value across the CPUs (system view)
*/
struct arm64_ftr_reg {
const char *name;
uint64_t strict_mask;
uint64_t user_mask;
uint64_t sys_val;
uint64_t user_val;
const struct arm64_ftr_bits *ftr_bits;
};
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
extern struct arm64_ftr_reg arm64_ftr_reg_ctrel0;
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
/* scope of capability check */
enum {
SCOPE_SYSTEM,
SCOPE_LOCAL_CPU,
};
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
struct arm64_cpu_capabilities {
const char *desc;
uint16_t capability;
int def_scope;/* default scope */
int (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
int (*enable)(void *);/* Called on all active CPUs */
union {
struct {/* To be used for erratum handling only */
uint32_t midr_model;
uint32_t midr_range_min, midr_range_max;
};
struct {/* Feature register checking */
uint32_t sys_reg;
uint8_t field_pos;
uint8_t min_field_value;
uint8_t hwcap_type;
int sign;
unsigned long hwcap;
};
};
};
/* @ref.impl include/linux/bitops.h */
/*
* Create a contiguous bitmask starting at bit position @l and ending at
* position @h. For example
* GENMASK(39, 21) gives us the 64-bit vector 0x000000ffffe00000.
*/
#define GENMASK(h, l) \
(((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline uint64_t arm64_ftr_mask(const struct arm64_ftr_bits *ftrp)
{
return (uint64_t)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift);
}
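/*
 * Worked example: for a 4-bit field at shift 4, arm64_ftr_mask() computes
 * GENMASK(7, 4) = ((~0UL << 4) & (~0UL >> (64 - 1 - 7))) = 0xf0,
 * i.e. exactly the bits of that feature-register field.
 */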
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int
cpuid_feature_extract_signed_field_width(uint64_t features, int field, int width)
{
return (int64_t)(features << (64 - width - field)) >> (64 - width);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int
cpuid_feature_extract_signed_field(uint64_t features, int field)
{
return cpuid_feature_extract_signed_field_width(features, field, 4);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline unsigned int
cpuid_feature_extract_unsigned_field_width(uint64_t features, int field, int width)
{
return (uint64_t)(features << (64 - width - field)) >> (64 - width);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline unsigned int
cpuid_feature_extract_unsigned_field(uint64_t features, int field)
{
return cpuid_feature_extract_unsigned_field_width(features, field, 4);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline uint64_t arm64_ftr_reg_user_value(const struct arm64_ftr_reg *reg)
{
return (reg->user_val | (reg->sys_val & reg->user_mask));
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int
cpuid_feature_extract_field_width(uint64_t features, int field, int width, int sign)
{
return (sign) ?
cpuid_feature_extract_signed_field_width(features, field, width) :
cpuid_feature_extract_unsigned_field_width(features, field, width);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int
cpuid_feature_extract_field(uint64_t features, int field, int sign)
{
return cpuid_feature_extract_field_width(features, field, 4, sign);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int64_t arm64_ftr_value(const struct arm64_ftr_bits *ftrp, uint64_t val)
{
return (int64_t)cpuid_feature_extract_field_width(val, ftrp->shift, ftrp->width, ftrp->sign);
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int id_aa64pfr0_32bit_el0(uint64_t pfr0)
{
uint32_t val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
return val == ID_AA64PFR0_EL0_32BIT_64BIT;
}
/* @ref.impl arch/arm64/include/asm/cpufeature.h */
static inline int id_aa64pfr0_sve(uint64_t pfr0)
{
uint32_t val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_SVE_SHIFT);
return val > 0;
}
void setup_cpu_features(void);
void update_cpu_features(int cpu,
struct cpuinfo_arm64 *info,
struct cpuinfo_arm64 *boot);
uint64_t read_system_reg(uint32_t id);
void init_cpu_features(struct cpuinfo_arm64 *info);
int enable_mrs_emulation(void);
/* @ref.impl arch/arm64/include/asm/hwcap.h */
enum {
CAP_HWCAP = 1,
#ifdef CONFIG_COMPAT
CAP_COMPAT_HWCAP,
CAP_COMPAT_HWCAP2,
#endif
};
#endif /* __ASM_CPUFEATURE_H */

View File

@ -0,0 +1,34 @@
/* cpuinfo.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
#ifndef __HEADER_ARM64_COMMON_CPUINFO_H
#define __HEADER_ARM64_COMMON_CPUINFO_H
#include <types.h>
/* @ref.impl arch/arm64/include/cpu.h */
/*
* Records attributes of an individual CPU.
*/
struct cpuinfo_arm64 {
uint32_t reg_midr;
unsigned int hwid; /* McKernel Original. */
uint32_t reg_ctr;
uint32_t reg_cntfrq;
uint32_t reg_dczid;
uint32_t reg_revidr;
uint64_t reg_id_aa64dfr0;
uint64_t reg_id_aa64dfr1;
uint64_t reg_id_aa64isar0;
uint64_t reg_id_aa64isar1;
uint64_t reg_id_aa64mmfr0;
uint64_t reg_id_aa64mmfr1;
uint64_t reg_id_aa64mmfr2;
uint64_t reg_id_aa64pfr0;
uint64_t reg_id_aa64pfr1;
uint64_t reg_id_aa64zfr0;
uint64_t reg_zcr;
};
#endif /* !__HEADER_ARM64_COMMON_CPUINFO_H */

View File

@ -0,0 +1,13 @@
/* cpulocal.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_CPULOCAL_H
#define __HEADER_ARM64_COMMON_CPULOCAL_H
#include <types.h>
#include <registers.h>
#include <thread_info.h>
union arm64_cpu_local_variables *get_arm64_cpu_local_variable(int id);
union arm64_cpu_local_variables *get_arm64_this_cpu_local(void);
void *get_arm64_this_cpu_kstack(void);
#endif /* !__HEADER_ARM64_COMMON_CPULOCAL_H */

View File

@ -0,0 +1,12 @@
/* cputable.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_CPUTABLE_H
#define __HEADER_ARM64_COMMON_CPUTABLE_H
struct cpu_info {
unsigned int cpu_id_val;
unsigned int cpu_id_mask;
const char *cpu_name;
unsigned long (*cpu_setup)(void);
};
#endif /* !__HEADER_ARM64_COMMON_CPUTABLE_H */

View File

@ -0,0 +1,49 @@
/* cputype.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
/* @ref.impl arch/arm64/include/asm/cputype.h */
#ifndef __HEADER_ARM64_COMMON_CPUTYPE_H
#define __HEADER_ARM64_COMMON_CPUTYPE_H
#include <sysreg.h>
#define MPIDR_LEVEL_BITS_SHIFT 3
#define MPIDR_LEVEL_BITS (1 << MPIDR_LEVEL_BITS_SHIFT)
#define MPIDR_LEVEL_MASK ((1 << MPIDR_LEVEL_BITS) - 1)
#define MPIDR_LEVEL_SHIFT(level) \
(((1 << level) >> 1) << MPIDR_LEVEL_BITS_SHIFT)
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
((mpidr >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)
#define read_cpuid(reg) read_sysreg_s(SYS_ ## reg)
#define MIDR_REVISION_MASK 0xf
#define MIDR_REVISION(midr) ((midr) & MIDR_REVISION_MASK)
#define MIDR_PARTNUM_SHIFT 4
#define MIDR_PARTNUM_MASK (0xfff << MIDR_PARTNUM_SHIFT)
#define MIDR_PARTNUM(midr) \
(((midr) & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT)
#define MIDR_VARIANT_SHIFT 20
#define MIDR_VARIANT_MASK (0xf << MIDR_VARIANT_SHIFT)
#define MIDR_VARIANT(midr) \
(((midr) & MIDR_VARIANT_MASK) >> MIDR_VARIANT_SHIFT)
#define MIDR_IMPLEMENTOR_SHIFT 24
#define MIDR_IMPLEMENTOR_MASK (0xff << MIDR_IMPLEMENTOR_SHIFT)
#define MIDR_IMPLEMENTOR(midr) \
(((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT)
#define ARM_CPU_IMP_CAVIUM 0x43
#ifndef __ASSEMBLY__
static inline unsigned int read_cpuid_id(void)
{
return read_cpuid(MIDR_EL1);
}
#endif /* !__ASSEMBLY__ */
#endif /* !__HEADER_ARM64_COMMON_CPUTYPE_H */
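
Example consumer of the MIDR accessors, in the spirit of the Cavium ThunderX workaround enabled elsewhere in this series (sketch; the helper name is illustrative):

static inline int cpu_is_cavium(void)
{
	return MIDR_IMPLEMENTOR(read_cpuid_id()) == ARM_CPU_IMP_CAVIUM;
}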

View File

@ -0,0 +1,35 @@
/* debug-monitors.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
#ifndef __HEADER_ARM64_COMMON_DEBUG_MONITORS_H
#define __HEADER_ARM64_COMMON_DEBUG_MONITORS_H
/* Low-level stepping controls. */
#define DBG_MDSCR_SS (1 << 0)
#define DBG_SPSR_SS (1 << 21)
/* MDSCR_EL1 enabling bits */
#define DBG_MDSCR_KDE (1 << 13)
#define DBG_MDSCR_MDE (1 << 15)
#define DBG_MDSCR_MASK ~(DBG_MDSCR_KDE | DBG_MDSCR_MDE)
#define DBG_ESR_EVT(x) (((x) >> 27) & 0x7)
/* AArch64 */
#define DBG_ESR_EVT_HWBP 0x0
#define DBG_ESR_EVT_HWSS 0x1
#define DBG_ESR_EVT_HWWP 0x2
#define DBG_ESR_EVT_BRK 0x6
#ifndef __ASSEMBLY__
unsigned char debug_monitors_arch(void);
void mdscr_write(unsigned int mdscr);
unsigned int mdscr_read(void);
void debug_monitors_init(void);
struct pt_regs;
void set_regs_spsr_ss(struct pt_regs *regs);
void clear_regs_spsr_ss(struct pt_regs *regs);
#endif /* !__ASSEMBLY__ */
#endif /* !__HEADER_ARM64_COMMON_DEBUG_MONITORS_H */

View File

@ -0,0 +1,28 @@
/* elf.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_ELF_H
#define __HEADER_ARM64_COMMON_ELF_H
#include <ihk/context.h>
/* ELF target machines defined */
#define EM_AARCH64 183
/* ELF header defined */
#define ELF_CLASS ELFCLASS64
#define ELF_DATA ELFDATA2LSB
#define ELF_OSABI ELFOSABI_NONE
#define ELF_ABIVERSION El_ABIVERSION_NONE
#define ELF_ARCH EM_AARCH64
#define ELF_NGREG64 (sizeof (struct user_pt_regs) / sizeof(elf_greg64_t))
/* PTRACE_GETREGSET and PTRACE_SETREGSET requests. */
#define NT_ARM_TLS 0x401 /* ARM TLS register */
#define NT_ARM_HW_BREAK 0x402 /* ARM hardware breakpoint registers */
#define NT_ARM_HW_WATCH 0x403 /* ARM hardware watchpoint registers */
#define NT_ARM_SYSTEM_CALL 0x404 /* ARM system call number */
#define NT_ARM_SVE 0x405 /* ARM Scalable Vector Extension registers */
typedef elf_greg64_t elf_gregset64_t[ELF_NGREG64];
#endif /* __HEADER_ARM64_COMMON_ELF_H */

View File

@ -0,0 +1,92 @@
/* elfcore.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef POSTK_DEBUG_ARCH_DEP_18 /* coredump arch separation. */
#ifndef __HEADER_ARM64_COMMON_ELFCORE_H
#define __HEADER_ARM64_COMMON_ELFCORE_H
typedef uint16_t Elf64_Half;
typedef uint32_t Elf64_Word;
typedef uint64_t Elf64_Xword;
typedef uint64_t Elf64_Addr;
typedef uint64_t Elf64_Off;
#define EI_NIDENT 16
typedef struct {
unsigned char e_ident[EI_NIDENT];
Elf64_Half e_type;
Elf64_Half e_machine;
Elf64_Word e_version;
Elf64_Addr e_entry;
Elf64_Off e_phoff;
Elf64_Off e_shoff;
Elf64_Word e_flags;
Elf64_Half e_ehsize;
Elf64_Half e_phentsize;
Elf64_Half e_phnum;
Elf64_Half e_shentsize;
Elf64_Half e_shnum;
Elf64_Half e_shstrndx;
} Elf64_Ehdr;
#define EI_MAG0 0
#define EI_MAG1 1
#define EI_MAG2 2
#define EI_MAG3 3
#define EI_CLASS 4
#define EI_DATA 5
#define EI_VERSION 6
#define EI_OSABI 7
#define EI_ABIVERSION 8
#define EI_PAD 9
#define ELFMAG0 0x7f
#define ELFMAG1 'E'
#define ELFMAG2 'L'
#define ELFMAG3 'F'
#define ELFCLASS64 2 /* 64-bit object */
#define ELFDATA2LSB 1 /* LSB */
#define El_VERSION 1 /* defined to be the same as EV_CURRENT */
#define ELFOSABI_NONE 0 /* unspecified */
#define El_ABIVERSION_NONE 0 /* unspecified */
#define ET_CORE 4 /* Core file */
#define EM_X86_64 62 /* AMD x86-64 architecture */
#define EM_K10M 181 /* Intel K10M */
#define EV_CURRENT 1 /* Current version */
typedef struct {
Elf64_Word p_type;
Elf64_Word p_flags;
Elf64_Off p_offset;
Elf64_Addr p_vaddr;
Elf64_Addr p_paddr;
Elf64_Xword p_filesz;
Elf64_Xword p_memsz;
Elf64_Xword p_align;
} Elf64_Phdr;
#define PT_LOAD 1
#define PT_NOTE 4
#define PF_X 1 /* executable bit */
#define PF_W 2 /* writable bit */
#define PF_R 4 /* readable bit */
struct note {
Elf64_Word namesz;
Elf64_Word descsz;
Elf64_Word type;
/* name char[namesz] and desc[descsz] */
};
#define NT_PRSTATUS 1
#define NT_PRFPREG 2
#define NT_PRPSINFO 3
#define NT_AUXV 6
#define NT_X86_STATE 0x202
#include "elfcoregpl.h"
#endif /* !__HEADER_ARM64_COMMON_ELFCORE_H */
#endif /* !POSTK_DEBUG_ARCH_DEP_18 */

View File

@ -0,0 +1,98 @@
/* elfcoregpl.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef POSTK_DEBUG_ARCH_DEP_18 /* coredump arch separation. */
#ifndef __HEADER_ARM64_COMMON_ELFCOREGPL_H
#define __HEADER_ARM64_COMMON_ELFCOREGPL_H
#define pid_t int
/* From /usr/include/linux/elfcore.h of Linux */
#define ELF_PRARGSZ (80)
/* From /usr/include/linux/elfcore.h of Linux */
struct elf_siginfo
{
int si_signo;
int si_code;
int si_errno;
};
/* From bfd/hosts/x86-64linux.h of gdb. */
typedef uint64_t __attribute__ ((__aligned__ (8))) a8_uint64_t;
typedef a8_uint64_t elf_greg64_t;
struct user_regs64_struct
{
a8_uint64_t r15;
a8_uint64_t r14;
a8_uint64_t r13;
a8_uint64_t r12;
a8_uint64_t rbp;
a8_uint64_t rbx;
a8_uint64_t r11;
a8_uint64_t r10;
a8_uint64_t r9;
a8_uint64_t r8;
a8_uint64_t rax;
a8_uint64_t rcx;
a8_uint64_t rdx;
a8_uint64_t rsi;
a8_uint64_t rdi;
a8_uint64_t orig_rax;
a8_uint64_t rip;
a8_uint64_t cs;
a8_uint64_t eflags;
a8_uint64_t rsp;
a8_uint64_t ss;
a8_uint64_t fs_base;
a8_uint64_t gs_base;
a8_uint64_t ds;
a8_uint64_t es;
a8_uint64_t fs;
a8_uint64_t gs;
};
#define ELF_NGREG64 (sizeof (struct user_regs64_struct) / sizeof(elf_greg64_t))
typedef elf_greg64_t elf_gregset64_t[ELF_NGREG64];
struct prstatus64_timeval
{
a8_uint64_t tv_sec;
a8_uint64_t tv_usec;
};
struct elf_prstatus64
{
struct elf_siginfo pr_info;
short int pr_cursig;
a8_uint64_t pr_sigpend;
a8_uint64_t pr_sighold;
pid_t pr_pid;
pid_t pr_ppid;
pid_t pr_pgrp;
pid_t pr_sid;
struct prstatus64_timeval pr_utime;
struct prstatus64_timeval pr_stime;
struct prstatus64_timeval pr_cutime;
struct prstatus64_timeval pr_cstime;
elf_gregset64_t pr_reg;
int pr_fpvalid;
};
struct elf_prpsinfo64
{
char pr_state;
char pr_sname;
char pr_zomb;
char pr_nice;
a8_uint64_t pr_flag;
unsigned int pr_uid;
unsigned int pr_gid;
int pr_pid, pr_ppid, pr_pgrp, pr_sid;
char pr_fname[16];
char pr_psargs[ELF_PRARGSZ];
};
#endif /* !__HEADER_ARM64_COMMON_ELFCOREGPL_H */
#endif /* !POSTK_DEBUG_ARCH_DEP_18 */

View File

@ -0,0 +1,60 @@
/* elfnote.h COPYRIGHT FUJITSU LIMITED 2016 */
/* @ref.impl include/linux/elfnote.h */
/*
* Helper macros to generate ELF Note structures, which are put into a
* PT_NOTE segment of the final vmlinux image. These are useful for
* including name-value pairs of metadata into the kernel binary (or
* modules?) for use by external programs.
*
* Each note has three parts: a name, a type and a desc. The name is
* intended to distinguish the note's originator, so it would be a
* company, project, subsystem, etc; it must be in a suitable form for
* use in a section name. The type is an integer which is used to tag
* the data, and is considered to be within the "name" namespace (so
* "FooCo"'s type 42 is distinct from "BarProj"'s type 42). The
* "desc" field is the actual data. There are no constraints on the
* desc field's contents, though typically they're fairly small.
*
* All notes from a given NAME are put into a section named
* .note.NAME. When the kernel image is finally linked, all the notes
* are packed into a single .notes section, which is mapped into the
* PT_NOTE segment. Because notes for a given name are grouped into
* the same section, they'll all be adjacent in the output file.
*
* This file defines macros for both C and assembler use. Their
* syntax is slightly different, but they're semantically similar.
*
* See the ELF specification for more detail about ELF notes.
*/
#ifndef __HEADER_ARM64_COMMON_ELFNOTE_H
#define __HEADER_ARM64_COMMON_ELFNOTE_H
#ifdef __ASSEMBLER__
/*
* Generate a structure with the same shape as Elf{32,64}_Nhdr (which
* turn out to be the same size and shape), followed by the name and
* desc data with appropriate padding. The 'desctype' argument is the
* assembler pseudo op defining the type of the data e.g. .asciz while
* 'descdata' is the data itself e.g. "hello, world".
*
* e.g. ELFNOTE(XYZCo, 42, .asciz, "forty-two")
* ELFNOTE(XYZCo, 12, .long, 0xdeadbeef)
*/
#define ELFNOTE_START(name, type, flags) \
.pushsection .note.name, flags,@note ; \
.balign 4 ; \
.long 2f - 1f /* namesz */ ; \
.long 4484f - 3f /* descsz */ ; \
.long type ; \
1:.asciz #name ; \
2:.balign 4 ; \
3:
#define ELFNOTE_END \
4484:.balign 4 ; \
.popsection ;
#endif /* __ASSEMBLER__ */
#endif /* !__HEADER_ARM64_COMMON_ELFNOTE_H */

View File

@ -0,0 +1,112 @@
/* errno.h COPYRIGHT FUJITSU LIMITED 2016 */
#ifndef __HEADER_ARM64_COMMON_ERRNO_H
#define __HEADER_ARM64_COMMON_ERRNO_H
#include <generic-errno.h>
#define EDEADLK 35 /* Resource deadlock would occur */
#define ENAMETOOLONG 36 /* File name too long */
#define ENOLCK 37 /* No record locks available */
#define ENOSYS 38 /* Function not implemented */
#define ENOTEMPTY 39 /* Directory not empty */
#define ELOOP 40 /* Too many symbolic links encountered */
#define EWOULDBLOCK EAGAIN /* Operation would block */
#define ENOMSG 42 /* No message of desired type */
#define EIDRM 43 /* Identifier removed */
#define ECHRNG 44 /* Channel number out of range */
#define EL2NSYNC 45 /* Level 2 not synchronized */
#define EL3HLT 46 /* Level 3 halted */
#define EL3RST 47 /* Level 3 reset */
#define ELNRNG 48 /* Link number out of range */
#define EUNATCH 49 /* Protocol driver not attached */
#define ENOCSI 50 /* No CSI structure available */
#define EL2HLT 51 /* Level 2 halted */
#define EBADE 52 /* Invalid exchange */
#define EBADR 53 /* Invalid request descriptor */
#define EXFULL 54 /* Exchange full */
#define ENOANO 55 /* No anode */
#define EBADRQC 56 /* Invalid request code */
#define EBADSLT 57 /* Invalid slot */
#define EDEADLOCK EDEADLK
#define EBFONT 59 /* Bad font file format */
#define ENOSTR 60 /* Device not a stream */
#define ENODATA 61 /* No data available */
#define ETIME 62 /* Timer expired */
#define ENOSR 63 /* Out of streams resources */
#define ENONET 64 /* Machine is not on the network */
#define ENOPKG 65 /* Package not installed */
#define EREMOTE 66 /* Object is remote */
#define ENOLINK 67 /* Link has been severed */
#define EADV 68 /* Advertise error */
#define ESRMNT 69 /* Srmount error */
#define ECOMM 70 /* Communication error on send */
#define EPROTO 71 /* Protocol error */
#define EMULTIHOP 72 /* Multihop attempted */
#define EDOTDOT 73 /* RFS specific error */
#define EBADMSG 74 /* Not a data message */
#define EOVERFLOW 75 /* Value too large for defined data type */
#define ENOTUNIQ 76 /* Name not unique on network */
#define EBADFD 77 /* File descriptor in bad state */
#define EREMCHG 78 /* Remote address changed */
#define ELIBACC 79 /* Can not access a needed shared library */
#define ELIBBAD 80 /* Accessing a corrupted shared library */
#define ELIBSCN 81 /* .lib section in a.out corrupted */
#define ELIBMAX 82 /* Attempting to link in too many shared libraries */
#define ELIBEXEC 83 /* Cannot exec a shared library directly */
#define EILSEQ 84 /* Illegal byte sequence */
#define ERESTART 85 /* Interrupted system call should be restarted */
#define ESTRPIPE 86 /* Streams pipe error */
#define EUSERS 87 /* Too many users */
#define ENOTSOCK 88 /* Socket operation on non-socket */
#define EDESTADDRREQ 89 /* Destination address required */
#define EMSGSIZE 90 /* Message too long */
#define EPROTOTYPE 91 /* Protocol wrong type for socket */
#define ENOPROTOOPT 92 /* Protocol not available */
#define EPROTONOSUPPORT 93 /* Protocol not supported */
#define ESOCKTNOSUPPORT 94 /* Socket type not supported */
#define EOPNOTSUPP 95 /* Operation not supported on transport endpoint */
#define EPFNOSUPPORT 96 /* Protocol family not supported */
#define EAFNOSUPPORT 97 /* Address family not supported by protocol */
#define EADDRINUSE 98 /* Address already in use */
#define EADDRNOTAVAIL 99 /* Cannot assign requested address */
#define ENETDOWN 100 /* Network is down */
#define ENETUNREACH 101 /* Network is unreachable */
#define ENETRESET 102 /* Network dropped connection because of reset */
#define ECONNABORTED 103 /* Software caused connection abort */
#define ECONNRESET 104 /* Connection reset by peer */
#define ENOBUFS 105 /* No buffer space available */
#define EISCONN 106 /* Transport endpoint is already connected */
#define ENOTCONN 107 /* Transport endpoint is not connected */
#define ESHUTDOWN 108 /* Cannot send after transport endpoint shutdown */
#define ETOOMANYREFS 109 /* Too many references: cannot splice */
#define ETIMEDOUT 110 /* Connection timed out */
#define ECONNREFUSED 111 /* Connection refused */
#define EHOSTDOWN 112 /* Host is down */
#define EHOSTUNREACH 113 /* No route to host */
#define EALREADY 114 /* Operation already in progress */
#define EINPROGRESS 115 /* Operation now in progress */
#define ESTALE 116 /* Stale NFS file handle */
#define EUCLEAN 117 /* Structure needs cleaning */
#define ENOTNAM 118 /* Not a XENIX named type file */
#define ENAVAIL 119 /* No XENIX semaphores available */
#define EISNAM 120 /* Is a named type file */
#define EREMOTEIO 121 /* Remote I/O error */
#define EDQUOT 122 /* Quota exceeded */
#define ENOMEDIUM 123 /* No medium found */
#define EMEDIUMTYPE 124 /* Wrong medium type */
#define ECANCELED 125 /* Operation Canceled */
#define ENOKEY 126 /* Required key not available */
#define EKEYEXPIRED 127 /* Key has expired */
#define EKEYREVOKED 128 /* Key has been revoked */
#define EKEYREJECTED 129 /* Key was rejected by service */
/* for robust mutexes */
#define EOWNERDEAD 130 /* Owner died */
#define ENOTRECOVERABLE 131 /* State not recoverable */
#define ERFKILL 132 /* Operation not possible due to RF-kill */
#endif


@@ -0,0 +1,180 @@
/* esr.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
/*
* Copyright (C) 2013 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_ESR_H
#define __ASM_ESR_H
#include <const.h>
#define ESR_ELx_EC_UNKNOWN (0x00)
#define ESR_ELx_EC_WFx (0x01)
/* Unallocated EC: 0x02 */
#define ESR_ELx_EC_CP15_32 (0x03)
#define ESR_ELx_EC_CP15_64 (0x04)
#define ESR_ELx_EC_CP14_MR (0x05)
#define ESR_ELx_EC_CP14_LS (0x06)
#define ESR_ELx_EC_FP_ASIMD (0x07)
#define ESR_ELx_EC_CP10_ID (0x08)
/* Unallocated EC: 0x09 - 0x0B */
#define ESR_ELx_EC_CP14_64 (0x0C)
/* Unallocated EC: 0x0d */
#define ESR_ELx_EC_ILL (0x0E)
/* Unallocated EC: 0x0F - 0x10 */
#define ESR_ELx_EC_SVC32 (0x11)
#define ESR_ELx_EC_HVC32 (0x12)
#define ESR_ELx_EC_SMC32 (0x13)
/* Unallocated EC: 0x14 */
#define ESR_ELx_EC_SVC64 (0x15)
#define ESR_ELx_EC_HVC64 (0x16)
#define ESR_ELx_EC_SMC64 (0x17)
#define ESR_ELx_EC_SYS64 (0x18)
#define ESR_ELx_EC_SVE (0x19)
/* Unallocated EC: 0x1A - 0x1E */
#define ESR_ELx_EC_IMP_DEF (0x1f)
#define ESR_ELx_EC_IABT_LOW (0x20)
#define ESR_ELx_EC_IABT_CUR (0x21)
#define ESR_ELx_EC_PC_ALIGN (0x22)
/* Unallocated EC: 0x23 */
#define ESR_ELx_EC_DABT_LOW (0x24)
#define ESR_ELx_EC_DABT_CUR (0x25)
#define ESR_ELx_EC_SP_ALIGN (0x26)
/* Unallocated EC: 0x27 */
#define ESR_ELx_EC_FP_EXC32 (0x28)
/* Unallocated EC: 0x29 - 0x2B */
#define ESR_ELx_EC_FP_EXC64 (0x2C)
/* Unallocated EC: 0x2D - 0x2E */
#define ESR_ELx_EC_SERROR (0x2F)
#define ESR_ELx_EC_BREAKPT_LOW (0x30)
#define ESR_ELx_EC_BREAKPT_CUR (0x31)
#define ESR_ELx_EC_SOFTSTP_LOW (0x32)
#define ESR_ELx_EC_SOFTSTP_CUR (0x33)
#define ESR_ELx_EC_WATCHPT_LOW (0x34)
#define ESR_ELx_EC_WATCHPT_CUR (0x35)
/* Unallocated EC: 0x36 - 0x37 */
#define ESR_ELx_EC_BKPT32 (0x38)
/* Unallocated EC: 0x39 */
#define ESR_ELx_EC_VECTOR32 (0x3A)
/* Unallocated EC: 0x3B */
#define ESR_ELx_EC_BRK64 (0x3C)
/* Unallocated EC: 0x3D - 0x3F */
#define ESR_ELx_EC_MAX (0x3F)
#define ESR_ELx_EC_SHIFT (26)
#define ESR_ELx_EC_MASK (UL(0x3F) << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr) (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
#define ESR_ELx_IL (UL(1) << 25)
#define ESR_ELx_ISS_MASK (ESR_ELx_IL - 1)
/* ISS field definitions shared by different classes */
#define ESR_ELx_WNR (UL(1) << 6)
/* Shared ISS field definitions for Data/Instruction aborts */
#define ESR_ELx_EA (UL(1) << 9)
#define ESR_ELx_S1PTW (UL(1) << 7)
/* Shared ISS fault status code (IFSC/DFSC) for Data/Instruction aborts */
#define ESR_ELx_FSC (0x3F)
#define ESR_ELx_FSC_TYPE (0x3C)
#define ESR_ELx_FSC_EXTABT (0x10)
#define ESR_ELx_FSC_ACCESS (0x08)
#define ESR_ELx_FSC_FAULT (0x04)
#define ESR_ELx_FSC_PERM (0x0C)
/* ISS field definitions for Data Aborts */
#define ESR_ELx_ISV (UL(1) << 24)
#define ESR_ELx_SAS_SHIFT (22)
#define ESR_ELx_SAS (UL(3) << ESR_ELx_SAS_SHIFT)
#define ESR_ELx_SSE (UL(1) << 21)
#define ESR_ELx_SRT_SHIFT (16)
#define ESR_ELx_SRT_MASK (UL(0x1F) << ESR_ELx_SRT_SHIFT)
#define ESR_ELx_SF (UL(1) << 15)
#define ESR_ELx_AR (UL(1) << 14)
#define ESR_ELx_CM (UL(1) << 8)
/* ISS field definitions for exceptions taken in to Hyp */
#define ESR_ELx_CV (UL(1) << 24)
#define ESR_ELx_COND_SHIFT (20)
#define ESR_ELx_COND_MASK (UL(0xF) << ESR_ELx_COND_SHIFT)
#define ESR_ELx_WFx_ISS_WFE (UL(1) << 0)
#define ESR_ELx_xVC_IMM_MASK ((1UL << 16) - 1)
/* ESR value templates for specific events */
/* BRK instruction trap from AArch64 state */
#define ESR_ELx_VAL_BRK64(imm) \
((ESR_ELx_EC_BRK64 << ESR_ELx_EC_SHIFT) | ESR_ELx_IL | \
((imm) & 0xffff))
/* ISS field definitions for System instruction traps */
#define ESR_ELx_SYS64_ISS_RES0_SHIFT 22
#define ESR_ELx_SYS64_ISS_RES0_MASK (UL(0x7) << ESR_ELx_SYS64_ISS_RES0_SHIFT)
#define ESR_ELx_SYS64_ISS_DIR_MASK 0x1
#define ESR_ELx_SYS64_ISS_DIR_READ 0x1
#define ESR_ELx_SYS64_ISS_DIR_WRITE 0x0
#define ESR_ELx_SYS64_ISS_RT_SHIFT 5
#define ESR_ELx_SYS64_ISS_RT_MASK (UL(0x1f) << ESR_ELx_SYS64_ISS_RT_SHIFT)
#define ESR_ELx_SYS64_ISS_CRM_SHIFT 1
#define ESR_ELx_SYS64_ISS_CRM_MASK (UL(0xf) << ESR_ELx_SYS64_ISS_CRM_SHIFT)
#define ESR_ELx_SYS64_ISS_CRN_SHIFT 10
#define ESR_ELx_SYS64_ISS_CRN_MASK (UL(0xf) << ESR_ELx_SYS64_ISS_CRN_SHIFT)
#define ESR_ELx_SYS64_ISS_OP1_SHIFT 14
#define ESR_ELx_SYS64_ISS_OP1_MASK (UL(0x7) << ESR_ELx_SYS64_ISS_OP1_SHIFT)
#define ESR_ELx_SYS64_ISS_OP2_SHIFT 17
#define ESR_ELx_SYS64_ISS_OP2_MASK (UL(0x7) << ESR_ELx_SYS64_ISS_OP2_SHIFT)
#define ESR_ELx_SYS64_ISS_OP0_SHIFT 20
#define ESR_ELx_SYS64_ISS_OP0_MASK (UL(0x3) << ESR_ELx_SYS64_ISS_OP0_SHIFT)
#define ESR_ELx_SYS64_ISS_SYS_MASK (ESR_ELx_SYS64_ISS_OP0_MASK | \
ESR_ELx_SYS64_ISS_OP1_MASK | \
ESR_ELx_SYS64_ISS_OP2_MASK | \
ESR_ELx_SYS64_ISS_CRN_MASK | \
ESR_ELx_SYS64_ISS_CRM_MASK)
#define ESR_ELx_SYS64_ISS_SYS_VAL(op0, op1, op2, crn, crm) \
(((op0) << ESR_ELx_SYS64_ISS_OP0_SHIFT) | \
((op1) << ESR_ELx_SYS64_ISS_OP1_SHIFT) | \
((op2) << ESR_ELx_SYS64_ISS_OP2_SHIFT) | \
((crn) << ESR_ELx_SYS64_ISS_CRN_SHIFT) | \
((crm) << ESR_ELx_SYS64_ISS_CRM_SHIFT))
#define ESR_ELx_SYS64_ISS_SYS_OP_MASK (ESR_ELx_SYS64_ISS_SYS_MASK | \
ESR_ELx_SYS64_ISS_DIR_MASK)
/*
* User space cache operations have the following sysreg encoding
* in System instructions.
* op0=1, op1=3, op2=1, crn=7, crm={ 5, 10, 11, 14 }, WRITE (L=0)
*/
#define ESR_ELx_SYS64_ISS_CRM_DC_CIVAC 14
#define ESR_ELx_SYS64_ISS_CRM_DC_CVAU 11
#define ESR_ELx_SYS64_ISS_CRM_DC_CVAC 10
#define ESR_ELx_SYS64_ISS_CRM_IC_IVAU 5
#define ESR_ELx_SYS64_ISS_EL0_CACHE_OP_MASK (ESR_ELx_SYS64_ISS_OP0_MASK | \
ESR_ELx_SYS64_ISS_OP1_MASK | \
ESR_ELx_SYS64_ISS_OP2_MASK | \
ESR_ELx_SYS64_ISS_CRN_MASK | \
ESR_ELx_SYS64_ISS_DIR_MASK)
#define ESR_ELx_SYS64_ISS_EL0_CACHE_OP_VAL \
(ESR_ELx_SYS64_ISS_SYS_VAL(1, 3, 1, 7, 0) | \
ESR_ELx_SYS64_ISS_DIR_WRITE)
#define ESR_ELx_SYS64_ISS_SYS_CTR ESR_ELx_SYS64_ISS_SYS_VAL(3, 3, 1, 0, 0)
#define ESR_ELx_SYS64_ISS_SYS_CTR_READ (ESR_ELx_SYS64_ISS_SYS_CTR | \
ESR_ELx_SYS64_ISS_DIR_READ)
#endif /* __ASM_ESR_H */
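
Everything needed to classify a fault is in the two fields above: EC selects the exception class, and for data aborts the ISS carries the WNR bit. A minimal sketch, assuming the macros above are in scope and `esr` was read from ESR_EL1 in a fault handler (the helper name is invented):

/* Classify a syndrome value: is it a write data abort from a lower EL? */
static int is_el0_write_data_abort(unsigned long esr)
{
	if (ESR_ELx_EC(esr) != ESR_ELx_EC_DABT_LOW)
		return 0;			/* not a lower-EL data abort */
	return (esr & ESR_ELx_WNR) != 0;	/* WNR: 1 = abort on a write */
}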


@@ -0,0 +1,99 @@
/* fpsimd.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
#ifndef __HEADER_ARM64_COMMON_FPSIMD_H
#define __HEADER_ARM64_COMMON_FPSIMD_H
#include <ptrace.h>
#ifndef __ASSEMBLY__
/*
* FP/SIMD storage area has:
* - FPSR and FPCR
* - 32 128-bit data registers
*
* Note that user_fpsimd forms a prefix of this structure, which is
* relied upon in the ptrace FP/SIMD accessors.
*/
/* @ref.impl arch/arm64/include/asm/fpsimd.h::struct fpsimd_state */
struct fpsimd_state {
union {
struct user_fpsimd_state user_fpsimd;
struct {
__uint128_t vregs[32];
unsigned int fpsr;
unsigned int fpcr;
/*
* For ptrace compatibility, pad to next 128-bit
* boundary here if extending this struct.
*/
};
};
/* the id of the last cpu to have restored this state */
unsigned int cpu;
};
/* need for struct process */
typedef struct fpsimd_state fp_regs_struct;
extern void thread_fpsimd_to_sve(struct thread *thread, fp_regs_struct *fp_regs);
extern void thread_sve_to_fpsimd(struct thread *thread, fp_regs_struct *fp_regs);
#ifdef CONFIG_ARM64_SVE
extern size_t sve_state_size(struct thread const *thread);
extern void sve_free(struct thread *thread);
extern void sve_alloc(struct thread *thread);
extern void sve_save_state(void *state, unsigned int *pfpsr);
extern void sve_load_state(void const *state, unsigned int const *pfpsr, unsigned long vq_minus_1);
extern unsigned int sve_get_vl(void);
extern int sve_set_thread_vl(struct thread *thread, const unsigned long vector_length, const unsigned long flags);
extern int sve_get_thread_vl(const struct thread *thread);
extern int sve_set_vector_length(struct thread *thread, unsigned long vl, unsigned long flags);
#define SVE_SET_VL(thread, vector_length, flags) sve_set_thread_vl(thread, vector_length, flags)
#define SVE_GET_VL(thread) sve_get_thread_vl(thread)
#else /* CONFIG_ARM64_SVE */
#include <ihk/debug.h>
#include <errno.h>
static void sve_save_state(void *state, unsigned int *pfpsr)
{
panic("PANIC:sve_save_state() was called CONFIG_ARM64_SVE off.\n");
}
static void sve_load_state(void const *state, unsigned int const *pfpsr, unsigned long vq_minus_1)
{
panic("PANIC:sve_load_state() was called CONFIG_ARM64_SVE off.\n");
}
static unsigned int sve_get_vl(void)
{
panic("PANIC:sve_get_vl() was called CONFIG_ARM64_SVE off.\n");
return (unsigned int)-1;
}
static int sve_set_vector_length(struct thread *thread, unsigned long vl, unsigned long flags)
{
return -EINVAL;
}
/* for prctl syscall */
#define SVE_SET_VL(a,b,c) (-EINVAL)
#define SVE_GET_VL(a) (-EINVAL)
#endif /* CONFIG_ARM64_SVE */
extern void init_sve_vl(void);
extern void fpsimd_save_state(struct fpsimd_state *state);
extern void fpsimd_load_state(struct fpsimd_state *state);
extern void thread_fpsimd_save(struct thread *thread);
extern void thread_fpsimd_load(struct thread *thread);
extern int sve_max_vl;
extern int sve_default_vl;
#endif /* !__ASSEMBLY__ */
#endif /* !__HEADER_ARM64_COMMON_FPSIMD_H */
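
The ptrace accessors depend on the layout invariant described in the comment above: user_fpsimd must be a prefix of struct fpsimd_state. That invariant is cheap to state at compile time; a sketch, assuming a C11 toolchain (_Static_assert) and this header in scope:

#include <stddef.h>	/* offsetof */

/* user_fpsimd must sit at offset 0 so a struct fpsimd_state can be
 * handed to code that expects a struct user_fpsimd_state prefix. */
_Static_assert(offsetof(struct fpsimd_state, user_fpsimd) == 0,
	       "user_fpsimd must be a prefix of struct fpsimd_state");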


@@ -0,0 +1,151 @@
/* fpsimdmacros.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
.macro _check_reg nr
.if (\nr) < 0 || (\nr) > 31
.error "Bad register number \nr."
.endif
.endm
.macro _check_zreg znr
.if (\znr) < 0 || (\znr) > 31
.error "Bad Scalable Vector Extension vector register number \znr."
.endif
.endm
.macro _check_preg pnr
.if (\pnr) < 0 || (\pnr) > 15
.error "Bad Scalable Vector Extension predicate register number \pnr."
.endif
.endm
.macro _check_num n, min, max
.if (\n) < (\min) || (\n) > (\max)
.error "Number \n out of range [\min,\max]"
.endif
.endm
.macro _zstrv znt, nspb, ioff=0
_check_zreg \znt
_check_reg \nspb
_check_num (\ioff), -0x100, 0xff
.inst 0xe5804000 \
| (\znt) \
| ((\nspb) << 5) \
| (((\ioff) & 7) << 10) \
| (((\ioff) & 0x1f8) << 13)
.endm
.macro _zldrv znt, nspb, ioff=0
_check_zreg \znt
_check_reg \nspb
_check_num (\ioff), -0x100, 0xff
.inst 0x85804000 \
| (\znt) \
| ((\nspb) << 5) \
| (((\ioff) & 7) << 10) \
| (((\ioff) & 0x1f8) << 13)
.endm
.macro _zstrp pnt, nspb, ioff=0
_check_preg \pnt
_check_reg \nspb
_check_num (\ioff), -0x100, 0xff
.inst 0xe5800000 \
| (\pnt) \
| ((\nspb) << 5) \
| (((\ioff) & 7) << 10) \
| (((\ioff) & 0x1f8) << 13)
.endm
.macro _zldrp pnt, nspb, ioff=0
_check_preg \pnt
_check_reg \nspb
_check_num (\ioff), -0x100, 0xff
.inst 0x85800000 \
| (\pnt) \
| ((\nspb) << 5) \
| (((\ioff) & 7) << 10) \
| (((\ioff) & 0x1f8) << 13)
.endm
.macro _zrdvl nspd, is1
_check_reg \nspd
_check_num (\is1), -0x20, 0x1f
.inst 0x04bf5000 \
| (\nspd) \
| (((\is1) & 0x3f) << 5)
.endm
.macro _zrdffr pnd
_check_preg \pnd
.inst 0x2519f000 \
| (\pnd)
.endm
.macro _zwrffr pnd
_check_preg \pnd
.inst 0x25289000 \
| ((\pnd) << 5)
.endm
.macro for from, to, insn
.if (\from) >= (\to)
\insn (\from)
.exitm
.endif
for \from, ((\from) + (\to)) / 2, \insn
for ((\from) + (\to)) / 2 + 1, \to, \insn
.endm
.macro sve_save nb, xpfpsr, ntmp
.macro savez n
_zstrv \n, \nb, (\n) - 34
.endm
.macro savep n
_zstrp \n, \nb, (\n) - 16
.endm
for 0, 31, savez
for 0, 15, savep
_zrdffr 0
_zstrp 0, \nb
_zldrp 0, \nb, -16
mrs x\ntmp, fpsr
str w\ntmp, [\xpfpsr]
mrs x\ntmp, fpcr
str w\ntmp, [\xpfpsr, #4]
.purgem savez
.purgem savep
.endm
.macro sve_load nb, xpfpsr, xvqminus1, ntmp
mrs_s x\ntmp, SYS_ZCR_EL1
bic x\ntmp, x\ntmp, ZCR_EL1_LEN_MASK
orr x\ntmp, x\ntmp, \xvqminus1
msr_s SYS_ZCR_EL1, x\ntmp // self-synchronising
.macro loadz n
_zldrv \n, \nb, (\n) - 34
.endm
.macro loadp n
_zldrp \n, \nb, (\n) - 16
.endm
for 0, 31, loadz
_zldrp 0, \nb
_zwrffr 0
for 0, 15, loadp
ldr w\ntmp, [\xpfpsr]
msr fpsr, x\ntmp
ldr w\ntmp, [\xpfpsr, #4]
msr fpcr, x\ntmp
.purgem loadz
.purgem loadp
.endm
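
The `for` macro above is not a linear loop: it splits the range in half and recurses, so expanding 32 register accesses only nests the assembler's macro expansion O(log n) deep. A hypothetical C translation, just to show the visit order of `for 0, 31, savez`:

#include <stdio.h>

/* Mirrors ".macro for from, to, insn": emit from..to by splitting the
 * range in half, exactly as the assembler macro recurses. */
static void for_expand(int from, int to, void (*insn)(int))
{
	if (from >= to) {
		insn(from);
		return;
	}
	for_expand(from, (from + to) / 2, insn);
	for_expand((from + to) / 2 + 1, to, insn);
}

static void savez(int n) { printf("savez %d\n", n); }

int main(void)
{
	for_expand(0, 31, savez);	/* visits 0, 1, ..., 31 in order */
	return 0;
}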


@@ -0,0 +1,92 @@
/* hw_breakpoint.h COPYRIGHT FUJITSU LIMITED 2016 */
#ifndef __HEADER_ARM64_COMMON_HW_BREAKPOINT_H
#define __HEADER_ARM64_COMMON_HW_BREAKPOINT_H
#include <ihk/types.h>
int hw_breakpoint_slots(int type);
unsigned long read_wb_reg(int reg, int n);
void write_wb_reg(int reg, int n, unsigned long val);
void hw_breakpoint_reset(void);
void arch_hw_breakpoint_init(void);
struct user_hwdebug_state;
int arch_validate_hwbkpt_settings(long note_type, struct user_hwdebug_state *hws, size_t len);
extern int core_num_brps;
extern int core_num_wrps;
/* @ref.impl include/uapi/linux/hw_breakpoint.h::HW_BREAKPOINT_LEN_n, HW_BREAKPOINT_xxx, bp_type_idx */
enum {
HW_BREAKPOINT_LEN_1 = 1,
HW_BREAKPOINT_LEN_2 = 2,
HW_BREAKPOINT_LEN_4 = 4,
HW_BREAKPOINT_LEN_8 = 8,
};
enum {
HW_BREAKPOINT_EMPTY = 0,
HW_BREAKPOINT_R = 1,
HW_BREAKPOINT_W = 2,
HW_BREAKPOINT_RW = HW_BREAKPOINT_R | HW_BREAKPOINT_W,
HW_BREAKPOINT_X = 4,
HW_BREAKPOINT_INVALID = HW_BREAKPOINT_RW | HW_BREAKPOINT_X,
};
enum bp_type_idx {
TYPE_INST = 0,
TYPE_DATA = 1,
TYPE_MAX
};
/* Breakpoint */
#define ARM_BREAKPOINT_EXECUTE 0
/* Watchpoints */
#define ARM_BREAKPOINT_LOAD 1
#define ARM_BREAKPOINT_STORE 2
#define AARCH64_ESR_ACCESS_MASK (1 << 6)
/* Privilege Levels */
#define AARCH64_BREAKPOINT_EL1 1
#define AARCH64_BREAKPOINT_EL0 2
/* Lengths */
#define ARM_BREAKPOINT_LEN_1 0x1
#define ARM_BREAKPOINT_LEN_2 0x3
#define ARM_BREAKPOINT_LEN_4 0xf
#define ARM_BREAKPOINT_LEN_8 0xff
/* @ref.impl arch/arm64/include/asm/hw_breakpoint.h::ARM_MAX_[BRP|WRP] */
/*
* Limits.
* Changing these will require modifications to the register accessors.
*/
#define ARM_MAX_BRP 16
#define ARM_MAX_WRP 16
/* @ref.impl arch/arm64/include/asm/hw_breakpoint.h::AARCH64_DBG_REG_xxx */
/* Virtual debug register bases. */
#define AARCH64_DBG_REG_BVR 0
#define AARCH64_DBG_REG_BCR (AARCH64_DBG_REG_BVR + ARM_MAX_BRP)
#define AARCH64_DBG_REG_WVR (AARCH64_DBG_REG_BCR + ARM_MAX_BRP)
#define AARCH64_DBG_REG_WCR (AARCH64_DBG_REG_WVR + ARM_MAX_WRP)
/* @ref.impl arch/arm64/include/asm/hw_breakpoint.h::AARCH64_DBG_REG_NAME_xxx */
/* Debug register names. */
#define AARCH64_DBG_REG_NAME_BVR "bvr"
#define AARCH64_DBG_REG_NAME_BCR "bcr"
#define AARCH64_DBG_REG_NAME_WVR "wvr"
#define AARCH64_DBG_REG_NAME_WCR "wcr"
/* @ref.impl arch/arm64/include/asm/hw_breakpoint.h::AARCH64_DBG_[READ|WRITE] */
/* Accessor macros for the debug registers. */
#define AARCH64_DBG_READ(N, REG, VAL) do {\
asm volatile("mrs %0, dbg" REG #N "_el1" : "=r" (VAL));\
} while (0)
#define AARCH64_DBG_WRITE(N, REG, VAL) do {\
asm volatile("msr dbg" REG #N "_el1, %0" :: "r" (VAL));\
} while (0)
#endif /* !__HEADER_ARM64_COMMON_HW_BREAKPOINT_H */
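
The virtual register bases above exist because the MRS/MSR accessor macros need the register number as a literal token: read_wb_reg() cannot compute a system register name at run time. A hedged sketch of the usual dispatch (only the first breakpoint-value cases shown; a full version covers all ARM_MAX_BRP entries, typically via a case-generating macro):

/* Sketch only: map an index n onto the dbgbvr<n>_el1 accessor. */
static unsigned long read_bvr(int n)
{
	unsigned long val = 0;

	switch (n) {
	case 0: AARCH64_DBG_READ(0, AARCH64_DBG_REG_NAME_BVR, val); break;
	case 1: AARCH64_DBG_READ(1, AARCH64_DBG_REG_NAME_BVR, val); break;
	/* ... cases 2..15 elided ... */
	}
	return val;
}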


@@ -0,0 +1,28 @@
/* hwcap.h COPYRIGHT FUJITSU LIMITED 2017 */
#ifdef POSTK_DEBUG_ARCH_DEP_65
#ifndef _UAPI__ASM_HWCAP_H
#define _UAPI__ASM_HWCAP_H
/*
* HWCAP flags - for elf_hwcap (in kernel) and AT_HWCAP
*/
#define HWCAP_FP (1 << 0)
#define HWCAP_ASIMD (1 << 1)
#define HWCAP_EVTSTRM (1 << 2)
#define HWCAP_AES (1 << 3)
#define HWCAP_PMULL (1 << 4)
#define HWCAP_SHA1 (1 << 5)
#define HWCAP_SHA2 (1 << 6)
#define HWCAP_CRC32 (1 << 7)
#define HWCAP_ATOMICS (1 << 8)
#define HWCAP_FPHP (1 << 9)
#define HWCAP_ASIMDHP (1 << 10)
#define HWCAP_CPUID (1 << 11)
#define HWCAP_ASIMDRDM (1 << 12)
#define HWCAP_SVE (1 << 13)
unsigned long arch_get_hwcap(void);
extern unsigned long elf_hwcap;
#endif /* _UAPI__ASM_HWCAP_H */
#endif /* POSTK_DEBUG_ARCH_DEP_65 */
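
arch_get_hwcap() exposes the same bit vector userspace receives through AT_HWCAP, so kernel-side feature tests are plain bit masks. A minimal sketch:

/* Sketch: only attempt SVE state handling when the CPU reports it. */
static int cpu_has_sve(void)
{
	return (arch_get_hwcap() & HWCAP_SVE) != 0;
}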


@@ -0,0 +1,363 @@
/* atomic.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_IHK_ATOMIC_H
#define __HEADER_ARM64_IHK_ATOMIC_H
#include <arch/cpu.h>
/***********************************************************************
* ihk_atomic_t
*/
typedef struct {
int counter;
} ihk_atomic_t;
#define IHK_ATOMIC_INIT(i) { (i) }
static inline int ihk_atomic_read(const ihk_atomic_t *v)
{
return (*(volatile int *)&(v)->counter);
}
static inline void ihk_atomic_set(ihk_atomic_t *v, int i)
{
v->counter = i;
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_add (atomic_##op) */
static inline void ihk_atomic_add(int i, ihk_atomic_t *v)
{
unsigned long tmp;
int result;
asm volatile("// atomic_add\n"
"1: ldxr %w0, %2\n"
" add %w0, %w0, %w3\n"
" stxr %w1, %w0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
: "Ir" (i));
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_sub (atomic_##op) */
static inline void ihk_atomic_sub(int i, ihk_atomic_t *v)
{
unsigned long tmp;
int result;
asm volatile("// atomic_sub\n"
"1: ldxr %w0, %2\n"
" sub %w0, %w0, %w3\n"
" stxr %w1, %w0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
: "Ir" (i));
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_inc */
#define ihk_atomic_inc(v) ihk_atomic_add(1, v)
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_dec */
#define ihk_atomic_dec(v) ihk_atomic_sub(1, v)
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_add_return (atomic_##op##_return) */
static inline int ihk_atomic_add_return(int i, ihk_atomic_t *v)
{
unsigned long tmp;
int result;
asm volatile("// atomic_add_return\n"
"1: ldxr %w0, %2\n"
" add %w0, %w0, %w3\n"
" stlxr %w1, %w0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
: "Ir" (i)
: "memory");
smp_mb();
return result;
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_sub_return (atomic_##op##_return) */
static inline int ihk_atomic_sub_return(int i, ihk_atomic_t *v)
{
unsigned long tmp;
int result;
asm volatile("// atomic_sub_return\n"
"1: ldxr %w0, %2\n"
" sub %w0, %w0, %w3\n"
" stlxr %w1, %w0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
: "Ir" (i)
: "memory");
smp_mb();
return result;
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_inc_and_test */
#define ihk_atomic_inc_and_test(v) (ihk_atomic_add_return(1, v) == 0)
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_dec_and_test */
#define ihk_atomic_dec_and_test(v) (ihk_atomic_sub_return(1, v) == 0)
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_inc_return */
#define ihk_atomic_inc_return(v) (ihk_atomic_add_return(1, v))
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic_dec_return */
#define ihk_atomic_dec_return(v) (ihk_atomic_sub_return(1, v))
/***********************************************************************
* ihk_atomic64_t
*/
typedef struct {
long counter64;
} ihk_atomic64_t;
#define IHK_ATOMIC64_INIT(i) { .counter64 = (i) }
static inline long ihk_atomic64_read(const ihk_atomic64_t *v)
{
return *(volatile long *)&(v)->counter64;
}
static inline void ihk_atomic64_set(ihk_atomic64_t *v, long i)
{
v->counter64 = i;
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic64_add (atomic64_##op) */
static inline void ihk_atomic64_add(long i, ihk_atomic64_t *v)
{
long result;
unsigned long tmp;
asm volatile("// atomic64_add\n"
"1: ldxr %0, %2\n"
" add %0, %0, %3\n"
" stxr %w1, %0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (v->counter64)
: "Ir" (i));
}
/* @ref.impl arch/arm64/include/asm/atomic.h::atomic64_inc */
#define ihk_atomic64_inc(v) ihk_atomic64_add(1LL, (v))
/***********************************************************************
* others
*/
/* @ref.impl arch/arm64/include/asm/cmpxchg.h::__xchg */
static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
{
unsigned long ret = 0, tmp;
switch (size) {
case 1:
asm volatile("// __xchg1\n"
"1: ldxrb %w0, %2\n"
" stlxrb %w1, %w3, %2\n"
" cbnz %w1, 1b\n"
: "=&r" (ret), "=&r" (tmp), "+Q" (*(unsigned char *)ptr)
: "r" (x)
: "memory");
break;
case 2:
asm volatile("// __xchg2\n"
"1: ldxrh %w0, %2\n"
" stlxrh %w1, %w3, %2\n"
" cbnz %w1, 1b\n"
: "=&r" (ret), "=&r" (tmp), "+Q" (*(unsigned short *)ptr)
: "r" (x)
: "memory");
break;
case 4:
asm volatile("// __xchg4\n"
"1: ldxr %w0, %2\n"
" stlxr %w1, %w3, %2\n"
" cbnz %w1, 1b\n"
: "=&r" (ret), "=&r" (tmp), "+Q" (*(unsigned int *)ptr)
: "r" (x)
: "memory");
break;
case 8:
asm volatile("// __xchg8\n"
"1: ldxr %0, %2\n"
" stlxr %w1, %3, %2\n"
" cbnz %w1, 1b\n"
: "=&r" (ret), "=&r" (tmp), "+Q" (*(unsigned long *)ptr)
: "r" (x)
: "memory");
break;
/*
default:
BUILD_BUG();
*/
}
smp_mb();
return ret;
}
/* @ref.impl arch/arm64/include/asm/cmpxchg.h::xchg */
#define xchg(ptr,x) \
({ \
__typeof__(*(ptr)) __ret; \
__ret = (__typeof__(*(ptr))) \
__xchg((unsigned long)(x), (ptr), sizeof(*(ptr))); \
__ret; \
})
#define xchg4(ptr, x) xchg(ptr,x)
#define xchg8(ptr, x) xchg(ptr,x)
/* @ref.impl arch/arm64/include/asm/cmpxchg.h::__cmpxchg */
static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
unsigned long new, int size)
{
unsigned long oldval = 0, res;
switch (size) {
case 1:
do {
asm volatile("// __cmpxchg1\n"
" ldxrb %w1, %2\n"
" mov %w0, #0\n"
" cmp %w1, %w3\n"
" b.ne 1f\n"
" stxrb %w0, %w4, %2\n"
"1:\n"
: "=&r" (res), "=&r" (oldval), "+Q" (*(unsigned char *)ptr)
: "Ir" (old), "r" (new) : "cc");
} while (res);
break;
case 2:
do {
asm volatile("// __cmpxchg2\n"
" ldxrh %w1, %2\n"
" mov %w0, #0\n"
" cmp %w1, %w3\n"
" b.ne 1f\n"
" stxrh %w0, %w4, %2\n"
"1:\n"
: "=&r" (res), "=&r" (oldval), "+Q" (*(unsigned short *)ptr)
: "Ir" (old), "r" (new)
: "cc");
} while (res);
break;
case 4:
do {
asm volatile("// __cmpxchg4\n"
" ldxr %w1, %2\n"
" mov %w0, #0\n"
" cmp %w1, %w3\n"
" b.ne 1f\n"
" stxr %w0, %w4, %2\n"
"1:\n"
: "=&r" (res), "=&r" (oldval), "+Q" (*(unsigned int *)ptr)
: "Ir" (old), "r" (new)
: "cc");
} while (res);
break;
case 8:
do {
asm volatile("// __cmpxchg8\n"
" ldxr %1, %2\n"
" mov %w0, #0\n"
" cmp %1, %3\n"
" b.ne 1f\n"
" stxr %w0, %4, %2\n"
"1:\n"
: "=&r" (res), "=&r" (oldval), "+Q" (*(unsigned long *)ptr)
: "Ir" (old), "r" (new)
: "cc");
} while (res);
break;
/*
default:
BUILD_BUG();
*/
}
return oldval;
}
/* @ref.impl arch/arm64/include/asm/cmpxchg.h::__cmpxchg_mb */
static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
unsigned long new, int size)
{
unsigned long ret;
smp_mb();
ret = __cmpxchg(ptr, old, new, size);
smp_mb();
return ret;
}
/* @ref.impl arch/arm64/include/asm/cmpxchg.h::cmpxchg */
#define cmpxchg(ptr, o, n) \
({ \
__typeof__(*(ptr)) __ret; \
__ret = (__typeof__(*(ptr))) \
__cmpxchg_mb((ptr), (unsigned long)(o), (unsigned long)(n), \
sizeof(*(ptr))); \
__ret; \
})
#define atomic_cmpxchg4(ptr, o, n) cmpxchg(ptr,o,n)
#define atomic_cmpxchg8(ptr, o, n) cmpxchg(ptr,o,n)
static inline void ihk_atomic_add_long(long i, long *v)
{
long result;
unsigned long tmp;
asm volatile("// atomic64_add\n"
"1: ldxr %0, %2\n"
" add %0, %0, %3\n"
" stxr %w1, %0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (*v)
: "Ir" (i));
}
static inline void ihk_atomic_add_ulong(long i, unsigned long *v)
{
long result;
unsigned long tmp;
asm volatile("// atomic64_add\n"
"1: ldxr %0, %2\n"
" add %0, %0, %3\n"
" stxr %w1, %0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (*v)
: "Ir" (i));
}
static inline unsigned long ihk_atomic_add_long_return(long i, long *v)
{
unsigned long result;
unsigned long tmp;
asm volatile("// atomic64_add_return\n"
"1: ldxr %0, %2\n"
" add %0, %0, %3\n"
" stlxr %w1, %0, %2\n"
" cbnz %w1, 1b"
: "=&r" (result), "=&r" (tmp), "+Q" (*v)
: "Ir" (i)
: "memory");
smp_mb();
return result;
}
#endif /* !__HEADER_ARM64_IHK_ATOMIC_H */
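
cmpxchg() returns the value that was in memory before the attempt, so the conventional compare-and-swap retry loop needs no extra reads. A sketch, assuming the definitions above are in scope (the bounded counter is invented for illustration):

/* Increment *ctr atomically but never past 'limit'.
 * Returns the pre-increment value, or 'limit' if already saturated. */
static int inc_below(int *ctr, int limit)
{
	int old, seen;

	for (old = *ctr; old < limit; old = seen) {
		seen = atomic_cmpxchg4(ctr, old, old + 1);
		if (seen == old)	/* nobody raced us: increment done */
			break;
	}
	return old < limit ? old : limit;
}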


@@ -0,0 +1,81 @@
/* context.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_IHK_CONTEXT_H
#define __HEADER_ARM64_IHK_CONTEXT_H
#include <registers.h>
struct thread_info;
typedef struct {
struct thread_info *thread;
} ihk_mc_kernel_context_t;
struct user_pt_regs {
unsigned long regs[31];
unsigned long sp;
unsigned long pc;
unsigned long pstate;
};
struct pt_regs {
union {
struct user_pt_regs user_regs;
struct {
unsigned long regs[31];
unsigned long sp;
unsigned long pc;
unsigned long pstate;
};
};
unsigned long orig_x0;
unsigned long syscallno;
};
typedef struct pt_regs ihk_mc_user_context_t;
/* @ref.impl arch/arm64/include/asm/ptrace.h */
#define GET_IP(regs) ((unsigned long)(regs)->pc)
#define SET_IP(regs, value) ((regs)->pc = ((uint64_t) (value)))
/* @ref.impl arch/arm64/include/asm/ptrace.h */
/* AArch32 CPSR bits */
#define COMPAT_PSR_MODE_MASK 0x0000001f
/* @ref.impl include/asm-generic/ptrace.h */
static inline unsigned long instruction_pointer(struct pt_regs *regs)
{
return GET_IP(regs);
}
/* @ref.impl include/asm-generic/ptrace.h */
static inline void instruction_pointer_set(struct pt_regs *regs,
unsigned long val)
{
SET_IP(regs, val);
}
/* @ref.impl arch/arm64/include/asm/ptrace.h */
/*
* Write a register given an architectural register index r.
* This handles the common case where 31 means XZR, not SP.
*/
static inline void pt_regs_write_reg(struct pt_regs *regs, int r,
unsigned long val)
{
if (r != 31)
regs->regs[r] = val;
}
/* temp */
#define ihk_mc_syscall_arg0(uc) (uc)->regs[0]
#define ihk_mc_syscall_arg1(uc) (uc)->regs[1]
#define ihk_mc_syscall_arg2(uc) (uc)->regs[2]
#define ihk_mc_syscall_arg3(uc) (uc)->regs[3]
#define ihk_mc_syscall_arg4(uc) (uc)->regs[4]
#define ihk_mc_syscall_arg5(uc) (uc)->regs[5]
#define ihk_mc_syscall_ret(uc) (uc)->regs[0]
#define ihk_mc_syscall_number(uc) (uc)->regs[8]
#define ihk_mc_syscall_pc(uc) (uc)->pc
#define ihk_mc_syscall_sp(uc) (uc)->sp
#endif /* !__HEADER_ARM64_IHK_CONTEXT_H */
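
The accessors above encode the AArch64 syscall convention: x8 carries the number, x0-x5 the arguments, and x0 the return value. A hypothetical handler sketch using them (the -38 is Linux's ENOSYS value, shown only as an example):

/* Sketch: pull the number and first argument out of the saved user
 * context, then write the return value back into x0. */
static void handle_syscall(ihk_mc_user_context_t *uc)
{
	unsigned long num  = ihk_mc_syscall_number(uc);	/* x8 */
	unsigned long arg0 = ihk_mc_syscall_arg0(uc);	/* x0 */

	(void)num; (void)arg0;				/* dispatch elided */
	ihk_mc_syscall_ret(uc) = (unsigned long)-38;	/* e.g. -ENOSYS */
}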


@@ -0,0 +1,14 @@
/* ikc.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_IHK_IKC_H
#define __HEADER_ARM64_IHK_IKC_H
#include <ikc/ihk.h>
#define IKC_PORT_IKC2MCKERNEL 501
#define IKC_PORT_IKC2LINUX 503
/* manycore side */
int ihk_mc_ikc_init_first(struct ihk_ikc_channel_desc *,
ihk_ikc_ph_t handler);
#endif /* !__HEADER_ARM64_IHK_IKC_H */


@@ -0,0 +1,35 @@
/* types.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_IHK_TYPES_H
#define __HEADER_ARM64_IHK_TYPES_H
#ifndef __ASSEMBLY__
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;
typedef signed char int8_t;
typedef signed short int16_t;
typedef signed int int32_t;
typedef signed long long int64_t;
typedef int64_t ptrdiff_t;
typedef int64_t intptr_t;
typedef uint64_t uintptr_t;
typedef uint64_t size_t;
typedef int64_t ssize_t;
typedef int64_t off_t;
#ifdef POSTK_DEBUG_ARCH_DEP_18 /* coredump arch separation. */
typedef int32_t key_t;
typedef uint32_t uid_t;
typedef uint32_t gid_t;
typedef int64_t time_t;
typedef int32_t pid_t;
#endif /* POSTK_DEBUG_ARCH_DEP_18 */
#endif /* __ASSEMBLY__ */
#define NULL ((void *)0)
#endif /* !__HEADER_ARM64_IHK_TYPES_H */


@@ -0,0 +1,99 @@
/* io.h COPYRIGHT FUJITSU LIMITED 2015 */
/*
* Based on arch/arm/include/asm/io.h
*
* Copyright (C) 1996-2000 Russell King
* Copyright (C) 2012 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_IO_H
#define __ASM_IO_H
#include <ihk/types.h>
#ifdef __KERNEL__
/*
* Generic IO read/write. These perform native-endian accesses.
*/
static inline void __raw_writeb(uint8_t val, volatile void *addr)
{
asm volatile("strb %w0, [%1]" : : "r" (val), "r" (addr));
}
static inline void __raw_writew(uint16_t val, volatile void *addr)
{
asm volatile("strh %w0, [%1]" : : "r" (val), "r" (addr));
}
static inline void __raw_writel(uint32_t val, volatile void *addr)
{
asm volatile("str %w0, [%1]" : : "r" (val), "r" (addr));
}
static inline void __raw_writeq(uint64_t val, volatile void *addr)
{
asm volatile("str %0, [%1]" : : "r" (val), "r" (addr));
}
static inline uint8_t __raw_readb(const volatile void *addr)
{
uint8_t val;
asm volatile("ldarb %w0, [%1]"
: "=r" (val) : "r" (addr));
return val;
}
static inline uint16_t __raw_readw(const volatile void *addr)
{
uint16_t val;
asm volatile("ldarh %w0, [%1]"
: "=r" (val) : "r" (addr));
return val;
}
static inline uint32_t __raw_readl(const volatile void *addr)
{
uint32_t val;
asm volatile("ldar %w0, [%1]"
: "=r" (val) : "r" (addr));
return val;
}
static inline uint64_t __raw_readq(const volatile void *addr)
{
uint64_t val;
asm volatile("ldar %0, [%1]"
: "=r" (val) : "r" (addr));
return val;
}
/*
* Relaxed I/O memory access primitives. These follow the Device memory
* ordering rules but do not guarantee any ordering relative to Normal memory
* accesses.
*/
#define readb_relaxed(c) ({ uint8_t __v = (uint8_t)__raw_readb(c); __v; })
#define readw_relaxed(c) ({ uint16_t __v = (uint16_t)__raw_readw(c); __v; })
#define readl_relaxed(c) ({ uint32_t __v = (uint32_t)__raw_readl(c); __v; })
#define readq_relaxed(c) ({ uint64_t __v = (uint64_t)__raw_readq(c); __v; })
#define writeb_relaxed(v,c) ((void)__raw_writeb((uint8_t)(v),(c)))
#define writew_relaxed(v,c) ((void)__raw_writew((uint16_t)(v),(c)))
#define writel_relaxed(v,c) ((void)__raw_writel((uint32_t)(v),(c)))
#define writeq_relaxed(v,c) ((void)__raw_writeq((uint64_t)(v),(c)))
#endif /* __KERNEL__ */
#endif /* __ASM_IO_H */
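
Because the *_relaxed accessors order only against other Device-memory accesses, a driver must add its own barrier when a Normal-memory update (say, a descriptor ring) has to be visible before the MMIO write. A sketch with an invented device layout:

#define DOORBELL_OFF 0x40	/* hypothetical register offset */

/* Publish prior Normal-memory writes, then ring the doorbell. */
static void ring_doorbell(volatile void *regs, uint32_t seq)
{
	/* The relaxed write below does not order against Normal memory. */
	asm volatile("dmb oshst" ::: "memory");
	writel_relaxed(seq, (volatile char *)regs + DOORBELL_OFF);
}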


@@ -0,0 +1,70 @@
/* irq.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_IRQ_H
#define __HEADER_ARM64_IRQ_H
#include <ihk/debug.h>
#include <ihk/context.h>
#include <sysreg.h>
#include <cputype.h>
/* use SGI interrupt number */
#define INTRID_CPU_NOTIFY 0
#define INTRID_IKC 1
#define INTRID_QUERY_FREE_MEM 2
#define INTRID_CPU_STOP 3
#define INTRID_TLB_FLUSH 4
#define INTRID_STACK_TRACE 6
#define INTRID_MEMDUMP 7
/* use PPI interrupt number */
#define INTRID_HYP_PHYS_TIMER 26 /* cnthp */
#define INTRID_VIRT_TIMER 27 /* cntv */
#define INTRID_HYP_VIRT_TIMER 28 /* cnthv */
#define INTRID_PHYS_TIMER 30 /* cntp */
/* timer intrid getter */
static int get_virt_timer_intrid(void)
{
#ifdef CONFIG_ARM64_VHE
unsigned long mmfr = read_cpuid(ID_AA64MMFR1_EL1);
if ((mmfr >> ID_AA64MMFR1_VHE_SHIFT) & 1UL) {
return INTRID_HYP_VIRT_TIMER;
}
#endif /* CONFIG_ARM64_VHE */
return INTRID_VIRT_TIMER;
}
static int get_phys_timer_intrid(void)
{
#ifdef CONFIG_ARM64_VHE
unsigned long mmfr = read_cpuid(ID_AA64MMFR1_EL1);
if ((mmfr >> ID_AA64MMFR1_VHE_SHIFT) & 1UL) {
return INTRID_HYP_PHYS_TIMER;
}
#endif /* CONFIG_ARM64_VHE */
return INTRID_PHYS_TIMER;
}
/* use timer checker */
extern unsigned long is_use_virt_timer(void);
/* Functions for GICv2 */
extern void gic_dist_init_gicv2(unsigned long dist_base_pa, unsigned long size);
extern void gic_cpu_init_gicv2(unsigned long cpu_base_pa, unsigned long size);
extern void gic_enable_gicv2(void);
extern void arm64_issue_ipi_gicv2(unsigned int cpuid, unsigned int vector);
extern void handle_interrupt_gicv2(struct pt_regs *regs);
/* Functions for GICv3 */
extern void gic_dist_init_gicv3(unsigned long dist_base_pa, unsigned long size);
extern void gic_cpu_init_gicv3(unsigned long cpu_base_pa, unsigned long size);
extern void gic_enable_gicv3(void);
extern void arm64_issue_ipi_gicv3(unsigned int cpuid, unsigned int vector);
extern void handle_interrupt_gicv3(struct pt_regs *regs);
void handle_IPI(unsigned int vector, struct pt_regs *regs);
#endif /* __HEADER_ARM64_IRQ_H */


@@ -0,0 +1,31 @@
/* irqflags.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_IRQFLAGS_H
#define __HEADER_ARM64_COMMON_IRQFLAGS_H
#include <ptrace.h>
/*
* save and restore debug state
*/
static inline unsigned long local_dbg_save(void)
{
unsigned long flags;
asm volatile(
"mrs %0, daif // local_dbg_save\n"
"msr daifset, #8"
: "=r" (flags)
:
: "memory");
return flags;
}
static inline void local_dbg_restore(unsigned long flags)
{
asm volatile(
"msr daif, %0 // local_dbg_restore"
:
: "r" (flags)
: "memory");
}
#endif /* !__HEADER_ARM64_COMMON_IRQFLAGS_H */
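
local_dbg_save() returns the caller's previous DAIF value while setting the D bit, so it nests naturally as a save/restore pair. Minimal sketch:

/* Sketch: run fn() with debug exceptions masked, then restore
 * whatever mask the caller had. */
static void with_debug_masked(void (*fn)(void))
{
	unsigned long flags = local_dbg_save();

	fn();			/* runs with DAIF.D set */
	local_dbg_restore(flags);
}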


@@ -0,0 +1,25 @@
/* linkage.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_COMMON_LINKAGE_H
#define __HEADER_ARM64_COMMON_LINKAGE_H
#include <arch-memory.h>
#include <compiler.h>
#define ASM_NL ;
#define __ALIGN .align 4
#define __ALIGN_STR ".align 4"
#define ENTRY(name) \
.globl name ASM_NL \
__ALIGN ASM_NL \
name:
#define END(name) \
.size name, .-name
#define ENDPROC(name) \
.type name, @function ASM_NL \
END(name)
#endif /* !__HEADER_ARM64_COMMON_LINKAGE_H */


@@ -0,0 +1,22 @@
/* mmu_context.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_MMU_CONTEXT_H
#define __HEADER_ARM64_COMMON_MMU_CONTEXT_H
#include <pgtable.h>
#include <memory.h>
/*
* Set TTBR0 to empty_zero_page. No translations will be possible via TTBR0.
*/
static inline void cpu_set_reserved_ttbr0(void)
{
unsigned long ttbr = virt_to_phys(empty_zero_page);
asm(
" msr ttbr0_el1, %0 // set TTBR0\n"
" isb"
:
: "r" (ttbr));
}
#endif /* !__HEADER_ARM64_COMMON_MMU_CONTEXT_H */


@@ -0,0 +1,196 @@
/* pgtable-hwdef.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_PGTABLE_HWDEF_H
#define __HEADER_ARM64_COMMON_PGTABLE_HWDEF_H
#ifndef __HEADER_ARM64_COMMON_ARCH_MEMORY_H
# error "arch-memory.h must be included before pgtable-hwdef.h"
#endif
#define PTRS_PER_PTE (1 << (PAGE_SHIFT - 3))
/*
* PMD_SHIFT determines the size a level 2 page table entry can map.
*/
#if CONFIG_ARM64_PGTABLE_LEVELS > 2
# define PMD_SHIFT ((PAGE_SHIFT - 3) * 2 + 3)
# define PMD_SIZE (1UL << PMD_SHIFT)
# define PMD_MASK (~(PMD_SIZE-1))
# define PTRS_PER_PMD PTRS_PER_PTE
#endif
/*
* PUD_SHIFT determines the size a level 1 page table entry can map.
*/
#if CONFIG_ARM64_PGTABLE_LEVELS > 3
# define PUD_SHIFT ((PAGE_SHIFT - 3) * 3 + 3)
# define PUD_SIZE (1UL << PUD_SHIFT)
# define PUD_MASK (~(PUD_SIZE-1))
# define PTRS_PER_PUD PTRS_PER_PTE
#endif
/*
* PGDIR_SHIFT determines the size a top-level page table entry can map
* (depending on the configuration, this level can be 0, 1 or 2).
*/
#define PGDIR_SHIFT ((PAGE_SHIFT - 3) * CONFIG_ARM64_PGTABLE_LEVELS + 3)
#define PGDIR_SIZE (_AC(1, UL) << PGDIR_SHIFT)
#define PGDIR_MASK (~(PGDIR_SIZE-1))
#define PTRS_PER_PGD (1 << (VA_BITS - PGDIR_SHIFT))
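/*
 * Worked example (illustrative assumption: 4KB granule, so PAGE_SHIFT = 12,
 * with VA_BITS = 48 and CONFIG_ARM64_PGTABLE_LEVELS = 4):
 *   PMD_SHIFT   = (12 - 3) * 2 + 3 = 21  ->  PMD_SIZE   =   2MB
 *   PUD_SHIFT   = (12 - 3) * 3 + 3 = 30  ->  PUD_SIZE   =   1GB
 *   PGDIR_SHIFT = (12 - 3) * 4 + 3 = 39  ->  PGDIR_SIZE = 512GB
 *   PTRS_PER_PGD = 1 << (48 - 39) = 512 entries in the top-level table
 */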
/*
* Section address mask and size definitions.
*/
#define SECTION_SHIFT PMD_SHIFT
#define SECTION_SIZE (UL(1) << SECTION_SHIFT)
#define SECTION_MASK (~(SECTION_SIZE-1))
/*
* Level 2 descriptor (PMD).
*/
#define PMD_TYPE_MASK (UL(3) << 0)
#define PMD_TYPE_FAULT (UL(0) << 0)
#define PMD_TYPE_TABLE (UL(3) << 0)
#define PMD_TYPE_SECT (UL(1) << 0)
#define PMD_TABLE_BIT (UL(1) << 1)
/*
* Table (D_Block)
*/
#define PMD_TBL_PXNT (UL(1) << 59)
#define PMD_TBL_UXNT (UL(1) << 60)
#define PMD_TBL_APT_USER (UL(1) << 61) /* 0:Access at EL0 permitted, 1:Access at EL0 not permitted */
#define PMD_TBL_APT_RDONLY (UL(2) << 61) /* 0:read/write (EL0-3), 1:read-only (EL0-3) */
#define PMD_TBL_NST (UL(1) << 63) /* 0:secure, 1:non-secure */
/*
* Section (D_Page)
*/
#define PMD_SECT_VALID (UL(1) << 0)
#define PMD_SECT_PROT_NONE (UL(1) << 58)
#define PMD_SECT_USER (UL(1) << 6) /* AP[1] */
#define PMD_SECT_RDONLY (UL(1) << 7) /* AP[2] */
#define PMD_SECT_S (UL(3) << 8)
#define PMD_SECT_AF (UL(1) << 10)
#define PMD_SECT_NG (UL(1) << 11)
#define PMD_SECT_PXN (UL(1) << 53)
#define PMD_SECT_UXN (UL(1) << 54)
/*
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
*/
#define PMD_ATTRINDX(t) (UL(t) << 2)
#define PMD_ATTRINDX_MASK (UL(7) << 2)
/*
* Level 3 descriptor (PTE).
*/
#define PTE_TYPE_MASK (UL(3) << 0)
#define PTE_TYPE_FAULT (UL(0) << 0)
#define PTE_TYPE_PAGE (UL(3) << 0)
#define PTE_TABLE_BIT (UL(1) << 1)
#define PTE_USER (UL(1) << 6) /* AP[1] */
#define PTE_RDONLY (UL(1) << 7) /* AP[2] */
#define PTE_SHARED (UL(3) << 8) /* SH[1:0], inner shareable */
#define PTE_AF (UL(1) << 10) /* Access Flag */
#define PTE_NG (UL(1) << 11) /* nG */
#define PTE_PXN (UL(1) << 53) /* Privileged XN */
#define PTE_UXN (UL(1) << 54) /* User XN */
/* Software defined PTE bits definition.*/
#define PTE_VALID (UL(1) << 0)
#define PTE_FILE (UL(1) << 2) /* only when !pte_present() */
#define PTE_DIRTY (UL(1) << 55)
#define PTE_SPECIAL (UL(1) << 56)
#define PTE_WRITE (UL(1) << 57)
#define PTE_PROT_NONE (UL(1) << 58) /* only when !PTE_VALID */
/*
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
*/
#define PTE_ATTRINDX(t) (UL(t) << 2)
#define PTE_ATTRINDX_MASK (UL(7) << 2)
/*
* Highest possible physical address supported.
*/
#define PHYS_MASK_SHIFT (48)
#define PHYS_MASK (((UL(1) << PHYS_MASK_SHIFT) - 1) & PAGE_MASK)
/*
* TCR flags.
*/
#define TCR_TxSZ(x) (((UL(64) - (x)) << 16) | ((UL(64) - (x)) << 0))
#define TCR_IRGN_NC ((UL(0) << 8) | (UL(0) << 24))
#define TCR_IRGN_WBWA ((UL(1) << 8) | (UL(1) << 24))
#define TCR_IRGN_WT ((UL(2) << 8) | (UL(2) << 24))
#define TCR_IRGN_WBnWA ((UL(3) << 8) | (UL(3) << 24))
#define TCR_IRGN_MASK ((UL(3) << 8) | (UL(3) << 24))
#define TCR_ORGN_NC ((UL(0) << 10) | (UL(0) << 26))
#define TCR_ORGN_WBWA ((UL(1) << 10) | (UL(1) << 26))
#define TCR_ORGN_WT ((UL(2) << 10) | (UL(2) << 26))
#define TCR_ORGN_WBnWA ((UL(3) << 10) | (UL(3) << 26))
#define TCR_ORGN_MASK ((UL(3) << 10) | (UL(3) << 26))
#define TCR_SHARED ((UL(3) << 12) | (UL(3) << 28))
#define TCR_TG0_4K (UL(0) << 14)
#define TCR_TG0_64K (UL(1) << 14)
#define TCR_TG0_16K (UL(2) << 14)
#define TCR_TG1_16K (UL(1) << 30)
#define TCR_TG1_4K (UL(2) << 30)
#define TCR_TG1_64K (UL(3) << 30)
#define TCR_ASID16 (UL(1) << 36)
#define TCR_TBI0 (UL(1) << 37)
/*
* Memory types available.
*/
#define MT_DEVICE_nGnRnE 0
#define MT_DEVICE_nGnRE 1
#define MT_DEVICE_GRE 2
#define MT_NORMAL_NC 3
#define MT_NORMAL 4
/*
* page table entry attribute set.
*/
#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL_NC))
#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_NORMAL))
#define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
#define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
#define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
#define _PAGE_DEFAULT (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
#define PAGE_KERNEL (_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
#define PAGE_KERNEL_EXEC (_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
#define PAGE_NONE (((_PAGE_DEFAULT) & ~PTE_TYPE_MASK) | PTE_PROT_NONE | PTE_PXN | PTE_UXN)
#define PAGE_SHARED (_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
#define PAGE_SHARED_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
#define PAGE_COPY (_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN)
#define PAGE_COPY_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN)
#define PAGE_READONLY (_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN)
#define PAGE_READONLY_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN)
#define __P000 PAGE_NONE
#define __P001 PAGE_READONLY
#define __P010 PAGE_COPY
#define __P011 PAGE_COPY
#define __P100 PAGE_READONLY_EXEC
#define __P101 PAGE_READONLY_EXEC
#define __P110 PAGE_COPY_EXEC
#define __P111 PAGE_COPY_EXEC
#define __S000 PAGE_NONE
#define __S001 PAGE_READONLY
#define __S010 PAGE_SHARED
#define __S011 PAGE_SHARED
#define __S100 PAGE_READONLY_EXEC
#define __S101 PAGE_READONLY_EXEC
#define __S110 PAGE_SHARED_EXEC
#define __S111 PAGE_SHARED_EXEC
#endif /* !__HEADER_ARM64_COMMON_PGTABLE_HWDEF_H */


@@ -0,0 +1,7 @@
/* pgtable.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_PGTABLE_H
#define __HEADER_ARM64_COMMON_PGTABLE_H
extern char empty_zero_page[];
#endif /* !__HEADER_ARM64_COMMON_PGTABLE_H */


@@ -0,0 +1,17 @@
/* prctl.h COPYRIGHT FUJITSU LIMITED 2017 */
#ifndef __HEADER_ARM64_COMMON_PRCTL_H
#define __HEADER_ARM64_COMMON_PRCTL_H
/* arm64 Scalable Vector Extension controls */
#define PR_SVE_SET_VL 48 /* set task vector length */
#define PR_SVE_SET_VL_THREAD (1 << 1) /* set just this thread */
#define PR_SVE_SET_VL_INHERIT (1 << 2) /* inherit across exec */
#define PR_SVE_SET_VL_ONEXEC (1 << 3) /* defer effect until exec */
#define PR_SVE_GET_VL 49 /* get task vector length */
/* Decode helpers for the return value from PR_SVE_GET_VL: */
#define PR_SVE_GET_VL_LEN(ret) ((ret) & 0x3fff) /* vector length */
#define PR_SVE_GET_VL_INHERIT (PR_SVE_SET_VL_INHERIT << 16)
/* For convenience, PR_SVE_SET_VL returns the result in the same encoding */
#endif /* !__HEADER_ARM64_COMMON_PRCTL_H */


@@ -0,0 +1,68 @@
/* psci.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
/* @ref.impl include/uapi/linux/psci.h */
/*
* ARM Power State and Coordination Interface (PSCI) header
*
* This header holds common PSCI defines and macros shared
* by: ARM kernel, ARM64 kernel, KVM ARM/ARM64 and user space.
*
* Copyright (C) 2014 Linaro Ltd.
* Author: Anup Patel <anup.patel@linaro.org>
*/
#ifndef __HEADER_ARM64_PSCI_H
#define __HEADER_ARM64_PSCI_H
/*
* PSCI v0.1 interface
*
* The PSCI v0.1 function numbers are implementation defined.
*
* Only PSCI return values such as: SUCCESS, NOT_SUPPORTED,
* INVALID_PARAMS, and DENIED defined below are applicable
* to PSCI v0.1.
*/
/* PSCI v0.2 interface */
#define PSCI_0_2_FN_BASE 0x84000000
#define PSCI_0_2_FN(n) (PSCI_0_2_FN_BASE + (n))
#define PSCI_0_2_64BIT 0x40000000
#define PSCI_0_2_FN64_BASE (PSCI_0_2_FN_BASE + PSCI_0_2_64BIT)
#define PSCI_0_2_FN64(n) (PSCI_0_2_FN64_BASE + (n))
#define PSCI_0_2_FN_PSCI_VERSION PSCI_0_2_FN(0)
#define PSCI_0_2_FN_CPU_OFF PSCI_0_2_FN(2)
#define PSCI_0_2_FN64_CPU_ON PSCI_0_2_FN64(3)
#define PSCI_0_2_FN64_AFFINITY_INFO PSCI_0_2_FN64(4)
/* PSCI v0.2 power state encoding for CPU_SUSPEND function */
#define PSCI_0_2_POWER_STATE_ID_MASK 0xffff
#define PSCI_0_2_POWER_STATE_ID_SHIFT 0
#define PSCI_0_2_POWER_STATE_TYPE_SHIFT 16
#define PSCI_0_2_POWER_STATE_TYPE_MASK \
(0x1 << PSCI_0_2_POWER_STATE_TYPE_SHIFT)
#define PSCI_0_2_POWER_STATE_AFFL_SHIFT 24
#define PSCI_0_2_POWER_STATE_AFFL_MASK \
(0x3 << PSCI_0_2_POWER_STATE_AFFL_SHIFT)
/* PSCI version decoding (independent of PSCI version) */
#define PSCI_VERSION_MAJOR_SHIFT 16
#define PSCI_VERSION_MINOR_MASK \
((1U << PSCI_VERSION_MAJOR_SHIFT) - 1)
#define PSCI_VERSION_MAJOR_MASK ~PSCI_VERSION_MINOR_MASK
#define PSCI_VERSION_MAJOR(ver) \
(((ver) & PSCI_VERSION_MAJOR_MASK) >> PSCI_VERSION_MAJOR_SHIFT)
#define PSCI_VERSION_MINOR(ver) \
((ver) & PSCI_VERSION_MINOR_MASK)
/* PSCI return values (inclusive of all PSCI versions) */
#define PSCI_RET_SUCCESS 0
#define PSCI_RET_NOT_SUPPORTED -1
#define PSCI_RET_INVALID_PARAMS -2
#define PSCI_RET_DENIED -3
int psci_init(void);
int psci_cpu_off(void);
int cpu_psci_cpu_boot(unsigned int cpu, unsigned long pc);
#endif /* __HEADER_ARM64_PSCI_H */
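
PSCI_VERSION packs the major number in the upper 16 bits and the minor in the lower 16, which the macros above unpack; 0x00010000, for example, decodes as PSCI 1.0. A minimal sketch (the helper is hypothetical) gating on the v0.2 interface:

/* Sketch: true when the firmware reports at least PSCI 0.2, whose
 * function IDs are the ones defined above. */
static int psci_has_0_2(unsigned int ver)
{
	return PSCI_VERSION_MAJOR(ver) > 0 ||
	       PSCI_VERSION_MINOR(ver) >= 2;
}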


@@ -0,0 +1,198 @@
/* ptrace.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_PTRACE_H
#define __HEADER_ARM64_COMMON_PTRACE_H
/*
* PSR bits
*/
#define PSR_MODE_EL0t 0x00000000
#define PSR_MODE_EL1t 0x00000004
#define PSR_MODE_EL1h 0x00000005
#define PSR_MODE_EL2t 0x00000008
#define PSR_MODE_EL2h 0x00000009
#define PSR_MODE_EL3t 0x0000000c
#define PSR_MODE_EL3h 0x0000000d
#define PSR_MODE_MASK 0x0000000f
/* AArch32 CPSR bits */
#define PSR_MODE32_BIT 0x00000010
/* AArch64 SPSR bits */
#define PSR_F_BIT 0x00000040
#define PSR_I_BIT 0x00000080
#define PSR_A_BIT 0x00000100
#define PSR_D_BIT 0x00000200
#define PSR_Q_BIT 0x08000000
#define PSR_V_BIT 0x10000000
#define PSR_C_BIT 0x20000000
#define PSR_Z_BIT 0x40000000
#define PSR_N_BIT 0x80000000
/*
* Groups of PSR bits
*/
#define PSR_f 0xff000000 /* Flags */
#define PSR_s 0x00ff0000 /* Status */
#define PSR_x 0x0000ff00 /* Extension */
#define PSR_c 0x000000ff /* Control */
/* Current Exception Level values, as contained in CurrentEL */
#define CurrentEL_EL1 (1 << 2)
#define CurrentEL_EL2 (2 << 2)
/* thread->ptrace_debugreg lower-area and higher-area */
#define HWS_BREAK 0
#define HWS_WATCH 1
#ifndef __ASSEMBLY__
#include <ihk/types.h>
struct user_hwdebug_state {
uint32_t dbg_info;
uint32_t pad;
struct {
uint64_t addr;
uint32_t ctrl;
uint32_t pad;
} dbg_regs[16];
};
struct user_fpsimd_state {
__uint128_t vregs[32];
uint32_t fpsr;
uint32_t fpcr;
uint32_t __reserved[2];
};
extern unsigned int ptrace_hbp_get_resource_info(unsigned int note_type);
/* SVE/FP/SIMD state (NT_ARM_SVE) */
struct user_sve_header {
uint32_t size; /* total meaningful regset content in bytes */
uint32_t max_size; /* maximum possible size for this thread */
uint16_t vl; /* current vector length */
uint16_t max_vl; /* maximum possible vector length */
uint16_t flags;
uint16_t __reserved;
};
/* Definitions for user_sve_header.flags: */
#define SVE_PT_REGS_MASK (1 << 0)
#define SVE_PT_REGS_FPSIMD 0
#define SVE_PT_REGS_SVE SVE_PT_REGS_MASK
#define SVE_PT_VL_THREAD PR_SVE_SET_VL_THREAD
#define SVE_PT_VL_INHERIT PR_SVE_SET_VL_INHERIT
#define SVE_PT_VL_ONEXEC PR_SVE_SET_VL_ONEXEC
/*
* The remainder of the SVE state follows struct user_sve_header. The
* total size of the SVE state (including header) depends on the
* metadata in the header: SVE_PT_SIZE(vq, flags) gives the total size
* of the state in bytes, including the header.
*
* Refer to <asm/sigcontext.h> for details of how to pass the correct
* "vq" argument to these macros.
*/
/* Offset from the start of struct user_sve_header to the register data */
#define SVE_PT_REGS_OFFSET ((sizeof(struct sve_context) + 15) / 16 * 16)
/*
* The register data content and layout depends on the value of the
* flags field.
*/
/*
* (flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD case:
*
* The payload starts at offset SVE_PT_FPSIMD_OFFSET, and is of type
* struct user_fpsimd_state. Additional data might be appended in the
* future: use SVE_PT_FPSIMD_SIZE(vq, flags) to compute the total size.
* SVE_PT_FPSIMD_SIZE(vq, flags) will never be less than
* sizeof(struct user_fpsimd_state).
*/
#define SVE_PT_FPSIMD_OFFSET SVE_PT_REGS_OFFSET
#define SVE_PT_FPSIMD_SIZE(vq, flags) (sizeof(struct user_fpsimd_state))
/*
* (flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE case:
*
* The payload starts at offset SVE_PT_SVE_OFFSET, and is of size
* SVE_PT_SVE_SIZE(vq, flags).
*
* Additional macros describe the contents and layout of the payload.
* For each, SVE_PT_SVE_x_OFFSET(args) is the start offset relative to
* the start of struct user_sve_header, and SVE_PT_SVE_x_SIZE(args) is
* the size in bytes:
*
* x type description
* - ---- -----------
* ZREGS \
* ZREG |
* PREGS | refer to <asm/sigcontext.h>
* PREG |
* FFR /
*
* FPSR uint32_t FPSR
* FPCR uint32_t FPCR
*
* Additional data might be appended in the future.
*/
#define SVE_PT_SVE_ZREG_SIZE(vq) SVE_SIG_ZREG_SIZE(vq)
#define SVE_PT_SVE_PREG_SIZE(vq) SVE_SIG_PREG_SIZE(vq)
#define SVE_PT_SVE_FFR_SIZE(vq) SVE_SIG_FFR_SIZE(vq)
#define SVE_PT_SVE_FPSR_SIZE sizeof(uint32_t)
#define SVE_PT_SVE_FPCR_SIZE sizeof(uint32_t)
#define __SVE_SIG_TO_PT(offset) \
((offset) - SVE_SIG_REGS_OFFSET + SVE_PT_REGS_OFFSET)
#define SVE_PT_SVE_OFFSET SVE_PT_REGS_OFFSET
#define SVE_PT_SVE_ZREGS_OFFSET \
__SVE_SIG_TO_PT(SVE_SIG_ZREGS_OFFSET)
#define SVE_PT_SVE_ZREG_OFFSET(vq, n) \
__SVE_SIG_TO_PT(SVE_SIG_ZREG_OFFSET(vq, n))
#define SVE_PT_SVE_ZREGS_SIZE(vq) \
(SVE_PT_SVE_ZREG_OFFSET(vq, SVE_NUM_ZREGS) - SVE_PT_SVE_ZREGS_OFFSET)
#define SVE_PT_SVE_PREGS_OFFSET(vq) \
__SVE_SIG_TO_PT(SVE_SIG_PREGS_OFFSET(vq))
#define SVE_PT_SVE_PREG_OFFSET(vq, n) \
__SVE_SIG_TO_PT(SVE_SIG_PREG_OFFSET(vq, n))
#define SVE_PT_SVE_PREGS_SIZE(vq) \
(SVE_PT_SVE_PREG_OFFSET(vq, SVE_NUM_PREGS) - \
SVE_PT_SVE_PREGS_OFFSET(vq))
#define SVE_PT_SVE_FFR_OFFSET(vq) \
__SVE_SIG_TO_PT(SVE_SIG_FFR_OFFSET(vq))
#define SVE_PT_SVE_FPSR_OFFSET(vq) \
((SVE_PT_SVE_FFR_OFFSET(vq) + SVE_PT_SVE_FFR_SIZE(vq) + 15) / 16 * 16)
#define SVE_PT_SVE_FPCR_OFFSET(vq) \
(SVE_PT_SVE_FPSR_OFFSET(vq) + SVE_PT_SVE_FPSR_SIZE)
/*
* Any future extension appended after FPCR must be aligned to the next
* 128-bit boundary.
*/
#define SVE_PT_SVE_SIZE(vq, flags) \
((SVE_PT_SVE_FPCR_OFFSET(vq) + SVE_PT_SVE_FPCR_SIZE - \
SVE_PT_SVE_OFFSET + 15) / 16 * 16)
#define SVE_PT_SIZE(vq, flags) \
(((flags) & SVE_PT_REGS_MASK) == SVE_PT_REGS_SVE ? \
SVE_PT_SVE_OFFSET + SVE_PT_SVE_SIZE(vq, flags) \
: SVE_PT_FPSIMD_OFFSET + SVE_PT_FPSIMD_SIZE(vq, flags))
#endif /* !__ASSEMBLY__ */
#endif /* !__HEADER_ARM64_COMMON_PTRACE_H */


@@ -0,0 +1,129 @@
/* registers.h COPYRIGHT FUJITSU LIMITED 2015-2016 */
#ifndef __HEADER_ARM64_COMMON_REGISTERS_H
#define __HEADER_ARM64_COMMON_REGISTERS_H
#include <types.h>
#include <arch/cpu.h>
#define RFLAGS_CF (1 << 0)
#define RFLAGS_PF (1 << 2)
#define RFLAGS_AF (1 << 4)
#define RFLAGS_ZF (1 << 6)
#define RFLAGS_SF (1 << 7)
#define RFLAGS_TF (1 << 8)
#define RFLAGS_IF (1 << 9)
#define RFLAGS_DF (1 << 10)
#define RFLAGS_OF (1 << 11)
#define RFLAGS_IOPL (3 << 12)
#define RFLAGS_NT (1 << 14)
#define RFLAGS_RF (1 << 16)
#define RFLAGS_VM (1 << 17)
#define RFLAGS_AC (1 << 18)
#define RFLAGS_VIF (1 << 19)
#define RFLAGS_VIP (1 << 20)
#define RFLAGS_ID (1 << 21)
#define DB6_B0 (1 << 0)
#define DB6_B1 (1 << 1)
#define DB6_B2 (1 << 2)
#define DB6_B3 (1 << 3)
#define DB6_BD (1 << 13)
#define DB6_BS (1 << 14)
#define DB6_BT (1 << 15)
#define MSR_EFER 0xc0000080
#define MSR_STAR 0xc0000081
#define MSR_LSTAR 0xc0000082
#define MSR_FMASK 0xc0000084
#define MSR_FS_BASE 0xc0000100
#define MSR_GS_BASE 0xc0000101
#define MSR_IA32_APIC_BASE 0x0000001b
#define MSR_PLATFORM_INFO 0x000000ce
#define MSR_IA32_PERF_CTL 0x00000199
#define MSR_IA32_MISC_ENABLE 0x000001a0
#define MSR_IA32_ENERGY_PERF_BIAS 0x000001b0
#define MSR_NHM_TURBO_RATIO_LIMIT 0x000001ad
#define MSR_IA32_CR_PAT 0x00000277
#define CVAL(event, mask) \
((((event) & 0xf00) << 24) | ((mask) << 8) | ((event) & 0xff))
#define CVAL2(event, mask, inv, count) \
((((event) & 0xf00) << 24) | ((mask) << 8) | ((event) & 0xff) | \
((inv & 1) << 23) | ((count & 0xff) << 24))
/* AMD */
#define MSR_PERF_CTL_0 0xc0010000
#define MSR_PERF_CTR_0 0xc0010004
static unsigned long xgetbv(unsigned int index)
{
return 0;
}
static void xsetbv(unsigned int index, unsigned long val)
{
}
static unsigned long rdpmc(unsigned int counter)
{
return 0;
}
static unsigned long rdmsr(unsigned int index)
{
return 0;
}
/* @ref.impl linux-linaro/arch/arm64/include/asm/arch_timer.h::arch_counter_get_cntvct */
static unsigned long rdtsc(void)
{
unsigned long cval;
isb();
asm volatile("mrs %0, cntvct_el0" : "=r" (cval));
return cval;
}
static void set_perfctl(int counter, int event, int mask)
{
}
static void start_perfctr(int counter)
{
}
static void stop_perfctr(int counter)
{
}
static void clear_perfctl(int counter)
{
}
static void set_perfctr(int counter, unsigned long value)
{
}
static unsigned long read_perfctr(int counter)
{
return 0;
}
#define ihk_mc_mb() do { } while (0)
#define REGS_GET_STACK_POINTER(regs) (((struct pt_regs *)regs)->sp)
enum arm64_pf_error_code {
PF_PROT = 1 << 0,
PF_WRITE = 1 << 1,
PF_USER = 1 << 2,
PF_RSVD = 1 << 3,
PF_INSTR = 1 << 4,
PF_PATCH = 1 << 29,
PF_POPULATE = 1 << 30,
};
#endif /* !__HEADER_ARM64_COMMON_REGISTERS_H */


@@ -0,0 +1,96 @@
/* rlimit.h COPYRIGHT FUJITSU LIMITED 2016 */
/**
* \file rlimit.h
* License details are found in the file LICENSE.
* \brief
* Kinds of resource limit
* \author Taku Shimosawa <shimosawa@is.s.u-tokyo.ac.jp> \par
* Copyright (C) 2011 - 2012 Taku Shimosawa
*/
/*
* HISTORY
*/
#ifndef __HEADER_ARM64_COMMON_RLIMIT_H
#define __HEADER_ARM64_COMMON_RLIMIT_H
/* Kinds of resource limit. */
enum __rlimit_resource
{
/* Per-process CPU limit, in seconds. */
RLIMIT_CPU = 0,
#define RLIMIT_CPU RLIMIT_CPU
/* Largest file that can be created, in bytes. */
RLIMIT_FSIZE = 1,
#define RLIMIT_FSIZE RLIMIT_FSIZE
/* Maximum size of data segment, in bytes. */
RLIMIT_DATA = 2,
#define RLIMIT_DATA RLIMIT_DATA
/* Maximum size of stack segment, in bytes. */
RLIMIT_STACK = 3,
#define RLIMIT_STACK RLIMIT_STACK
/* Largest core file that can be created, in bytes. */
RLIMIT_CORE = 4,
#define RLIMIT_CORE RLIMIT_CORE
/* Largest resident set size, in bytes.
This affects swapping; processes that are exceeding their
resident set size will be more likely to have physical memory
taken from them. */
__RLIMIT_RSS = 5,
#define RLIMIT_RSS __RLIMIT_RSS
/* Number of open files. */
RLIMIT_NOFILE = 7,
__RLIMIT_OFILE = RLIMIT_NOFILE, /* BSD name for same. */
#define RLIMIT_NOFILE RLIMIT_NOFILE
#define RLIMIT_OFILE __RLIMIT_OFILE
/* Address space limit. */
RLIMIT_AS = 9,
#define RLIMIT_AS RLIMIT_AS
/* Number of processes. */
__RLIMIT_NPROC = 6,
#define RLIMIT_NPROC __RLIMIT_NPROC
/* Locked-in-memory address space. */
__RLIMIT_MEMLOCK = 8,
#define RLIMIT_MEMLOCK __RLIMIT_MEMLOCK
/* Maximum number of file locks. */
__RLIMIT_LOCKS = 10,
#define RLIMIT_LOCKS __RLIMIT_LOCKS
/* Maximum number of pending signals. */
__RLIMIT_SIGPENDING = 11,
#define RLIMIT_SIGPENDING __RLIMIT_SIGPENDING
/* Maximum bytes in POSIX message queues. */
__RLIMIT_MSGQUEUE = 12,
#define RLIMIT_MSGQUEUE __RLIMIT_MSGQUEUE
/* Maximum nice priority allowed to raise to.
Nice levels 19 .. -20 correspond to 0 .. 39
values of this resource limit. */
__RLIMIT_NICE = 13,
#define RLIMIT_NICE __RLIMIT_NICE
/* Maximum realtime priority allowed for non-priviledged
processes. */
__RLIMIT_RTPRIO = 14,
#define RLIMIT_RTPRIO __RLIMIT_RTPRIO
__RLIMIT_NLIMITS = 15,
__RLIM_NLIMITS = __RLIMIT_NLIMITS
#define RLIMIT_NLIMITS __RLIMIT_NLIMITS
#define RLIM_NLIMITS __RLIM_NLIMITS
};
#include <generic-rlimit.h>
#endif


@@ -0,0 +1,409 @@
/* signal.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_SIGNAL_H
#define __HEADER_ARM64_COMMON_SIGNAL_H
#include <fpsimd.h>
#include <ihk/types.h>
#define _NSIG 64
#define _NSIG_BPW 64
#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
typedef unsigned long int __sigset_t;
#define __sigmask(sig) (((__sigset_t) 1) << ((sig) - 1))
typedef struct {
__sigset_t __val[_NSIG_WORDS];
} sigset_t;
#define SIG_BLOCK 0
#define SIG_UNBLOCK 1
#define SIG_SETMASK 2
struct sigaction {
void (*sa_handler)(int);
unsigned long sa_flags;
void (*sa_restorer)(int);
sigset_t sa_mask;
};
typedef void __sig_fn_t(int);
typedef __sig_fn_t *__sig_handler_t;
#define SIG_DFL (__sig_handler_t)0
#define SIG_IGN (__sig_handler_t)1
#define SIG_ERR (__sig_handler_t)-1
#define SA_NOCLDSTOP 0x00000001U
#define SA_NOCLDWAIT 0x00000002U
#define SA_NODEFER 0x40000000U
#define SA_ONSTACK 0x08000000U
#define SA_RESETHAND 0x80000000U
#define SA_RESTART 0x10000000U
#define SA_SIGINFO 0x00000004U
/* Required for AArch32 compatibility. */
#define SA_RESTORER 0x04000000U
struct k_sigaction {
struct sigaction sa;
};
typedef struct sigaltstack {
void *ss_sp;
int ss_flags;
size_t ss_size;
} stack_t;
#define MINSIGSTKSZ 5120
#define SS_ONSTACK 1
#define SS_DISABLE 2
typedef union sigval {
int sival_int;
void *sival_ptr;
} sigval_t;
#define __SI_MAX_SIZE 128
#define __SI_PAD_SIZE ((__SI_MAX_SIZE / sizeof (int)) - 4)
typedef struct siginfo {
int si_signo; /* Signal number. */
int si_errno; /* If non-zero, an errno value associated with
this signal, as defined in <errno.h>. */
int si_code; /* Signal code. */
#define SI_USER 0 /* sent by kill, sigsend, raise */
#define SI_KERNEL 0x80 /* sent by the kernel from somewhere */
#define SI_QUEUE -1 /* sent by sigqueue */
#define SI_TIMER __SI_CODE(__SI_TIMER,-2) /* sent by timer expiration */
#define SI_MESGQ __SI_CODE(__SI_MESGQ,-3) /* sent by real time mesq state change */
#define SI_ASYNCIO -4 /* sent by AIO completion */
#define SI_SIGIO -5 /* sent by queued SIGIO */
#define SI_TKILL -6 /* sent by tkill system call */
#define SI_DETHREAD -7 /* sent by execve() killing subsidiary threads */
#define ILL_ILLOPC 1 /* illegal opcode */
#define ILL_ILLOPN 2 /* illegal operand */
#define ILL_ILLADR 3 /* illegal addressing mode */
#define ILL_ILLTRP 4 /* illegal trap */
#define ILL_PRVOPC 5 /* privileged opcode */
#define ILL_PRVREG 6 /* privileged register */
#define ILL_COPROC 7 /* coprocessor error */
#define ILL_BADSTK 8 /* internal stack error */
#define FPE_INTDIV 1 /* integer divide by zero */
#define FPE_INTOVF 2 /* integer overflow */
#define FPE_FLTDIV 3 /* floating point divide by zero */
#define FPE_FLTOVF 4 /* floating point overflow */
#define FPE_FLTUND 5 /* floating point underflow */
#define FPE_FLTRES 6 /* floating point inexact result */
#define FPE_FLTINV 7 /* floating point invalid operation */
#define FPE_FLTSUB 8 /* subscript out of range */
#define SEGV_MAPERR 1 /* address not mapped to object */
#define SEGV_ACCERR 2 /* invalid permissions for mapped object */
#define BUS_ADRALN 1 /* invalid address alignment */
#define BUS_ADRERR 2 /* non-existent physical address */
#define BUS_OBJERR 3 /* object specific hardware error */
/* hardware memory error consumed on a machine check: action required */
#define BUS_MCEERR_AR 4
/* hardware memory error detected in process but not consumed: action optional */
#define BUS_MCEERR_AO 5
#define TRAP_BRKPT 1 /* process breakpoint */
#define TRAP_TRACE 2 /* process trace trap */
#define TRAP_BRANCH 3 /* process taken branch trap */
#define TRAP_HWBKPT 4 /* hardware breakpoint/watchpoint */
#define CLD_EXITED 1 /* child has exited */
#define CLD_KILLED 2 /* child was killed */
#define CLD_DUMPED 3 /* child terminated abnormally */
#define CLD_TRAPPED 4 /* traced child has trapped */
#define CLD_STOPPED 5 /* child has stopped */
#define CLD_CONTINUED 6 /* stopped child has continued */
#define POLL_IN 1 /* data input available */
#define POLL_OUT 2 /* output buffers available */
#define POLL_MSG 3 /* input message available */
#define POLL_ERR 4 /* i/o error */
#define POLL_PRI 5 /* high priority input available */
#define POLL_HUP 6 /* device disconnected */
#define SIGEV_SIGNAL 0 /* notify via signal */
#define SIGEV_NONE 1 /* other notification: meaningless */
#define SIGEV_THREAD 2 /* deliver via thread creation */
#define SIGEV_THREAD_ID 4 /* deliver to thread */
union {
int _pad[__SI_PAD_SIZE];
/* kill(). */
struct {
int si_pid; /* Sending process ID. */
int si_uid; /* Real user ID of sending process. */
} _kill;
/* POSIX.1b timers. */
struct {
int si_tid; /* Timer ID. */
int si_overrun; /* Overrun count. */
sigval_t si_sigval; /* Signal value. */
} _timer;
/* POSIX.1b signals. */
struct {
int si_pid; /* Sending process ID. */
int si_uid; /* Real user ID of sending process. */
sigval_t si_sigval; /* Signal value. */
} _rt;
/* SIGCHLD. */
struct {
int si_pid; /* Which child. */
int si_uid; /* Real user ID of sending process. */
int si_status; /* Exit value or signal. */
long si_utime;
long si_stime;
} _sigchld;
/* SIGILL, SIGFPE, SIGSEGV, SIGBUS. */
struct {
void *si_addr; /* Faulting insn/memory ref. */
} _sigfault;
/* SIGPOLL. */
struct {
long int si_band; /* Band event for SIGPOLL. */
int si_fd;
} _sigpoll;
} _sifields;
} siginfo_t;
struct signalfd_siginfo {
unsigned int ssi_signo;
int ssi_errno;
int ssi_code;
unsigned int ssi_pid;
unsigned int ssi_uid;
int ssi_fd;
unsigned int ssi_tid;
unsigned int ssi_band;
unsigned int ssi_overrun;
unsigned int ssi_trapno;
int ssi_status;
int ssi_int;
unsigned long ssi_ptr;
unsigned long ssi_utime;
unsigned long ssi_stime;
unsigned long ssi_addr;
unsigned short ssi_addr_lsb;
char __pad[46];
};
#define SIGHUP 1
#define SIGINT 2
#define SIGQUIT 3
#define SIGILL 4
#define SIGTRAP 5
#define SIGABRT 6
#define SIGIOT 6
#define SIGBUS 7
#define SIGFPE 8
#define SIGKILL 9
#define SIGUSR1 10
#define SIGSEGV 11
#define SIGUSR2 12
#define SIGPIPE 13
#define SIGALRM 14
#define SIGTERM 15
#define SIGSTKFLT 16
#define SIGCHLD 17
#define SIGCONT 18
#define SIGSTOP 19
#define SIGTSTP 20
#define SIGTTIN 21
#define SIGTTOU 22
#define SIGURG 23
#define SIGXCPU 24
#define SIGXFSZ 25
#define SIGVTALRM 26
#define SIGPROF 27
#define SIGWINCH 28
#define SIGIO 29
#define SIGPOLL SIGIO
#define SIGPWR 30
#define SIGSYS 31
#define SIGUNUSED 31
#define SIGRTMIN 32
#ifndef SIGRTMAX
#define SIGRTMAX _NSIG
#endif
#define PTRACE_EVENT_EXEC 4
/*
* @ref.impl linux-linaro/arch/arm64/include/uapi/asm/sigcontext.h
*/
struct sigcontext {
unsigned long fault_address;
/* AArch64 registers */
unsigned long regs[31];
unsigned long sp;
unsigned long pc;
unsigned long pstate;
/* 4K reserved for FP/SIMD state and future expansion */
unsigned char __reserved[4096] /*__attribute__((__aligned__(16)))*/;
};
/*
* Header to be used at the beginning of structures extending the user
* context. Such structures must be placed after the rt_sigframe on the stack
* and be 16-byte aligned. The last structure must be a dummy one with the
* magic and size set to 0.
*/
struct _aarch64_ctx {
unsigned int magic;
unsigned int size;
};
#define FPSIMD_MAGIC 0x46508001
struct fpsimd_context {
struct _aarch64_ctx head;
unsigned int fpsr;
unsigned int fpcr;
__uint128_t vregs[32];
};
/* ESR_EL1 context */
#define ESR_MAGIC 0x45535201
struct esr_context {
struct _aarch64_ctx head;
unsigned long esr;
};
#define EXTRA_MAGIC 0x45585401
struct extra_context {
struct _aarch64_ctx head;
void *data; /* 16-byte aligned pointer to the extra space */
uint32_t size; /* size in bytes of the extra space */
};
#define SVE_MAGIC 0x53564501
#define fpsimd_sve_state(vq) { \
__uint128_t zregs[32][vq]; \
uint16_t pregs[16][vq]; \
uint16_t ffr[vq]; \
}
struct sve_context {
struct _aarch64_ctx head;
uint16_t vl;
uint16_t __reserved[3];
};
/*
* The SVE architecture leaves space for future expansion of the
* vector length beyond its initial architectural limit of 2048 bits
* (16 quadwords).
*/
#define SVE_VQ_MIN 1
#define SVE_VQ_MAX 0x200
#define SVE_VL_MIN (SVE_VQ_MIN * 0x10)
#define SVE_VL_MAX (SVE_VQ_MAX * 0x10)
#define SVE_NUM_ZREGS 32
#define SVE_NUM_PREGS 16
#define sve_vl_valid(vl) \
((vl) % 0x10 == 0 && (vl) >= SVE_VL_MIN && (vl) <= SVE_VL_MAX)
#define sve_vq_from_vl(vl) ((vl) / 0x10)
/*
* The total size of meaningful data in the SVE context in bytes,
* including the header, is given by SVE_SIG_CONTEXT_SIZE(vq).
*
* Note: for all these macros, the "vq" argument denotes the SVE
* vector length in quadwords (i.e., units of 128 bits).
*
* The correct way to obtain vq is to use sve_vq_from_vl(vl). The
* result is valid if and only if sve_vl_valid(vl) is true. This is
* guaranteed for a struct sve_context written by the kernel.
*
*
* Additional macros describe the contents and layout of the payload.
* For each, SVE_SIG_x_OFFSET(args) is the start offset relative to
* the start of struct sve_context, and SVE_SIG_x_SIZE(args) is the
* size in bytes:
*
*
* x type description
* - ---- -----------
* REGS the entire SVE context
*
* ZREGS __uint128_t[SVE_NUM_ZREGS][vq] all Z-registers
* ZREG __uint128_t[vq] individual Z-register Zn
*
* PREGS uint16_t[SVE_NUM_PREGS][vq] all P-registers
* PREG uint16_t[vq] individual P-register Pn
*
* FFR uint16_t[vq] first-fault status register
*
* Additional data might be appended in the future.
*/
#define SVE_SIG_ZREG_SIZE(vq) ((uint32_t)(vq) * 16)
#define SVE_SIG_PREG_SIZE(vq) ((uint32_t)(vq) * 2)
#define SVE_SIG_FFR_SIZE(vq) SVE_SIG_PREG_SIZE(vq)
#define SVE_SIG_REGS_OFFSET ((sizeof(struct sve_context) + 15) / 16 * 16)
#define SVE_SIG_ZREGS_OFFSET SVE_SIG_REGS_OFFSET
#define SVE_SIG_ZREG_OFFSET(vq, n) \
(SVE_SIG_ZREGS_OFFSET + SVE_SIG_ZREG_SIZE(vq) * (n))
#define SVE_SIG_ZREGS_SIZE(vq) \
(SVE_SIG_ZREG_OFFSET(vq, SVE_NUM_ZREGS) - SVE_SIG_ZREGS_OFFSET)
#define SVE_SIG_PREGS_OFFSET(vq) \
(SVE_SIG_ZREGS_OFFSET + SVE_SIG_ZREGS_SIZE(vq))
#define SVE_SIG_PREG_OFFSET(vq, n) \
(SVE_SIG_PREGS_OFFSET(vq) + SVE_SIG_PREG_SIZE(vq) * (n))
#define SVE_SIG_PREGS_SIZE(vq) \
(SVE_SIG_PREG_OFFSET(vq, SVE_NUM_PREGS) - SVE_SIG_PREGS_OFFSET(vq))
#define SVE_SIG_FFR_OFFSET(vq) \
(SVE_SIG_PREGS_OFFSET(vq) + SVE_SIG_PREGS_SIZE(vq))
#define SVE_SIG_REGS_SIZE(vq) \
(SVE_SIG_FFR_OFFSET(vq) + SVE_SIG_FFR_SIZE(vq) - SVE_SIG_REGS_OFFSET)
#define SVE_SIG_CONTEXT_SIZE(vq) (SVE_SIG_REGS_OFFSET + SVE_SIG_REGS_SIZE(vq))
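/*
 * Editor's sketch (illustrative, not part of this header): a reader
 * helper built only on the macros above. For a 512-bit vector length,
 * vl = 64 bytes, sve_vl_valid(64) holds, sve_vq_from_vl(64) == 4, and
 * SVE_SIG_REGS_OFFSET == 16, so Z-register n starts 16 + 64 * n bytes
 * into the context.
 */
static inline __uint128_t *sve_sig_zreg_ptr(struct sve_context *sc,
					    unsigned int n)
{
	/* assumes a kernel-written context, i.e. sve_vl_valid(sc->vl) */
	unsigned int vq = sve_vq_from_vl(sc->vl);
	return (__uint128_t *)((char *)sc + SVE_SIG_ZREG_OFFSET(vq, n));
}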
/*
* @ref.impl linux-linaro/arch/arm64/include/asm/ucontext.h
*/
struct ucontext {
unsigned long uc_flags;
struct ucontext *uc_link;
stack_t uc_stack;
sigset_t uc_sigmask;
/* glibc uses a 1024-bit sigset_t */
unsigned char __unused[1024 / 8 - sizeof(sigset_t)];
/* last for future expansion */
struct sigcontext uc_mcontext;
};
void arm64_notify_die(const char *str, struct pt_regs *regs, struct siginfo *info, int err);
void set_signal(int sig, void *regs, struct siginfo *info);
void check_signal(unsigned long rc, void *regs, int num);
void check_signal_irq_disabled(unsigned long rc, void *regs, int num);
#endif /* __HEADER_ARM64_COMMON_SIGNAL_H */

@ -0,0 +1,23 @@
/* smp.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_SMP_H
#define __HEADER_ARM64_COMMON_SMP_H
#ifndef __ASSEMBLY__
/*
* Initial data for bringing up a secondary CPU.
*/
struct secondary_data {
void *stack;
unsigned long next_pc;
unsigned long arg;
};
extern struct secondary_data secondary_data;
#endif /* __ASSEMBLY__ */
/* struct secondary_data offsets */
#define SECONDARY_DATA_STACK 0x00
#define SECONDARY_DATA_NEXT_PC 0x08
#define SECONDARY_DATA_ARG 0x10
#endif /* !__HEADER_ARM64_COMMON_SMP_H */

@ -0,0 +1,17 @@
/* stringify.h COPYRIGHT FUJITSU LIMITED 2017 */
/**
* @ref.impl host-kernel/include/linux/stringify.h
*/
#ifndef __LINUX_STRINGIFY_H
#define __LINUX_STRINGIFY_H
/* Indirect stringification. Doing two levels allows the parameter to be a
* macro itself. For example, compile with -DFOO=bar, __stringify(FOO)
* converts to "bar".
*/
#define __stringify_1(x...) #x
#define __stringify(x...) __stringify_1(x)
#endif /* !__LINUX_STRINGIFY_H */
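/*
 * Illustration (hypothetical FOO, as in the comment above): with
 * -DFOO=bar on the command line, __stringify(FOO) expands to "bar",
 * whereas the single-level __stringify_1(FOO) would expand to "FOO".
 */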

@ -0,0 +1,148 @@
/* syscall_list.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
SYSCALL_DELEGATED(4, io_getevents)
SYSCALL_DELEGATED(17, getcwd)
SYSCALL_DELEGATED(22, epoll_pwait)
SYSCALL_DELEGATED(25, fcntl)
SYSCALL_HANDLED(29, ioctl)
SYSCALL_DELEGATED(43, statfs)
SYSCALL_DELEGATED(44, fstatfs)
#ifdef POSTK_DEBUG_ARCH_DEP_62 /* Absorb the difference between open and openat args. */
SYSCALL_HANDLED(56, openat)
#else /* POSTK_DEBUG_ARCH_DEP_62 */
SYSCALL_DELEGATED(56, openat)
#endif /* POSTK_DEBUG_ARCH_DEP_62 */
SYSCALL_HANDLED(57, close)
SYSCALL_DELEGATED(61, getdents64)
SYSCALL_DELEGATED(62, lseek)
SYSCALL_HANDLED(63, read)
SYSCALL_DELEGATED(64, write)
SYSCALL_DELEGATED(66, writev)
SYSCALL_DELEGATED(67, pread64)
SYSCALL_DELEGATED(68, pwrite64)
SYSCALL_DELEGATED(72, pselect6)
SYSCALL_DELEGATED(73, ppoll)
SYSCALL_HANDLED(74, signalfd4)
SYSCALL_DELEGATED(78, readlinkat)
SYSCALL_DELEGATED(80, fstat)
SYSCALL_HANDLED(93, exit)
SYSCALL_HANDLED(94, exit_group)
SYSCALL_HANDLED(95, waitid)
SYSCALL_HANDLED(96, set_tid_address)
SYSCALL_HANDLED(98, futex)
SYSCALL_HANDLED(99, set_robust_list)
SYSCALL_HANDLED(101, nanosleep)
SYSCALL_HANDLED(102, getitimer)
SYSCALL_HANDLED(103, setitimer)
SYSCALL_HANDLED(113, clock_gettime)
SYSCALL_DELEGATED(114, clock_getres)
SYSCALL_DELEGATED(115, clock_nanosleep)
SYSCALL_HANDLED(117, ptrace)
SYSCALL_HANDLED(118, sched_setparam)
SYSCALL_HANDLED(119, sched_setscheduler)
SYSCALL_HANDLED(120, sched_getscheduler)
SYSCALL_HANDLED(121, sched_getparam)
SYSCALL_HANDLED(122, sched_setaffinity)
SYSCALL_HANDLED(123, sched_getaffinity)
SYSCALL_HANDLED(124, sched_yield)
SYSCALL_HANDLED(125, sched_get_priority_max)
SYSCALL_HANDLED(126, sched_get_priority_min)
SYSCALL_HANDLED(127, sched_rr_get_interval)
SYSCALL_HANDLED(129, kill)
SYSCALL_HANDLED(130, tkill)
SYSCALL_HANDLED(131, tgkill)
SYSCALL_HANDLED(132, sigaltstack)
SYSCALL_HANDLED(133, rt_sigsuspend)
SYSCALL_HANDLED(134, rt_sigaction)
SYSCALL_HANDLED(135, rt_sigprocmask)
SYSCALL_HANDLED(136, rt_sigpending)
SYSCALL_HANDLED(137, rt_sigtimedwait)
SYSCALL_HANDLED(138, rt_sigqueueinfo)
SYSCALL_HANDLED(139, rt_sigreturn)
SYSCALL_HANDLED(143, setregid)
SYSCALL_HANDLED(144, setgid)
SYSCALL_HANDLED(145, setreuid)
SYSCALL_HANDLED(146, setuid)
SYSCALL_HANDLED(147, setresuid)
SYSCALL_HANDLED(148, getresuid)
SYSCALL_HANDLED(149, setresgid)
SYSCALL_HANDLED(150, getresgid)
SYSCALL_HANDLED(151, setfsuid)
SYSCALL_HANDLED(152, setfsgid)
SYSCALL_HANDLED(153, times)
SYSCALL_HANDLED(154, setpgid)
SYSCALL_DELEGATED(160, uname)
SYSCALL_HANDLED(163, getrlimit)
SYSCALL_HANDLED(164, setrlimit)
SYSCALL_HANDLED(165, getrusage)
SYSCALL_HANDLED(167, prctl)
SYSCALL_HANDLED(168, getcpu)
SYSCALL_HANDLED(169, gettimeofday)
SYSCALL_HANDLED(170, settimeofday)
SYSCALL_HANDLED(172, getpid)
SYSCALL_HANDLED(173, getppid)
SYSCALL_HANDLED(174, getuid)
SYSCALL_HANDLED(175, geteuid)
SYSCALL_HANDLED(176, getgid)
SYSCALL_HANDLED(177, getegid)
SYSCALL_HANDLED(178, gettid)
SYSCALL_DELEGATED(188, msgrcv)
SYSCALL_DELEGATED(189, msgsnd)
SYSCALL_DELEGATED(192, semtimedop)
SYSCALL_DELEGATED(193, semop)
SYSCALL_HANDLED(194, shmget)
SYSCALL_HANDLED(195, shmctl)
SYSCALL_HANDLED(196, shmat)
SYSCALL_HANDLED(197, shmdt)
SYSCALL_HANDLED(214, brk)
SYSCALL_HANDLED(215, munmap)
SYSCALL_HANDLED(216, mremap)
SYSCALL_HANDLED(220, clone)
SYSCALL_HANDLED(221, execve)
SYSCALL_HANDLED(222, mmap)
SYSCALL_HANDLED(226, mprotect)
SYSCALL_HANDLED(227, msync)
SYSCALL_HANDLED(228, mlock)
SYSCALL_HANDLED(229, munlock)
SYSCALL_HANDLED(230, mlockall)
SYSCALL_HANDLED(231, munlockall)
SYSCALL_HANDLED(232, mincore)
SYSCALL_HANDLED(233, madvise)
SYSCALL_HANDLED(234, remap_file_pages)
SYSCALL_HANDLED(235, mbind)
SYSCALL_HANDLED(236, get_mempolicy)
SYSCALL_HANDLED(237, set_mempolicy)
SYSCALL_HANDLED(238, migrate_pages)
SYSCALL_HANDLED(239, move_pages)
SYSCALL_HANDLED(241, perf_event_open)
SYSCALL_HANDLED(260, wait4)
SYSCALL_HANDLED(270, process_vm_readv)
SYSCALL_HANDLED(271, process_vm_writev)
SYSCALL_HANDLED(601, pmc_init)
SYSCALL_HANDLED(602, pmc_start)
SYSCALL_HANDLED(603, pmc_stop)
SYSCALL_HANDLED(604, pmc_reset)
SYSCALL_HANDLED(700, get_cpu_id)
#ifdef PROFILE_ENABLE
SYSCALL_HANDLED(__NR_profile, profile)
#endif // PROFILE_ENABLE
SYSCALL_HANDLED(730, util_migrate_inter_kernel)
SYSCALL_HANDLED(731, util_indicate_clone)
SYSCALL_HANDLED(732, get_system)
/* McKernel Specific */
SYSCALL_HANDLED(801, swapout)
SYSCALL_HANDLED(802, linux_mlock)
SYSCALL_HANDLED(803, suspend_threads)
SYSCALL_HANDLED(804, resume_threads)
SYSCALL_HANDLED(811, linux_spawn)
SYSCALL_DELEGATED(1024, open)
SYSCALL_DELEGATED(1026, unlink)
SYSCALL_DELEGATED(1035, readlink)
SYSCALL_HANDLED(1045, signalfd)
SYSCALL_DELEGATED(1049, stat)
SYSCALL_DELEGATED(1060, getpgrp)
SYSCALL_DELEGATED(1062, time)
SYSCALL_HANDLED(1071, vfork)
SYSCALL_DELEGATED(1079, fork)
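/*
 * Editor's note (illustrative, not part of this file): lists like the
 * above are consumed X-macro style; the includer defines
 * SYSCALL_HANDLED() and SYSCALL_DELEGATED() to what it needs before
 * including syscall_list.h. A hypothetical number-to-name table:
 *
 *	#define SYSCALL_HANDLED(number, name)	[number] = #name,
 *	#define SYSCALL_DELEGATED(number, name)	[number] = #name,
 *	static const char *syscall_name[1080] = {
 *	#include "syscall_list.h"
 *	};
 *	#undef SYSCALL_HANDLED
 *	#undef SYSCALL_DELEGATED
 */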

@ -0,0 +1,339 @@
/* sysreg.h COPYRIGHT FUJITSU LIMITED 2016-2017 */
/*
* Macros for accessing system registers with older binutils.
*
* Copyright (C) 2014 ARM Ltd.
* Author: Catalin Marinas <catalin.marinas@arm.com>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_SYSREG_H
#define __ASM_SYSREG_H
#include <types.h>
#include <stringify.h>
/*
* ARMv8 ARM reserves the following encoding for system registers:
* (Ref: ARMv8 ARM, Section: "System instruction class encoding overview",
* C5.2, version:ARM DDI 0487A.f)
* [20-19] : Op0
* [18-16] : Op1
* [15-12] : CRn
* [11-8] : CRm
* [7-5] : Op2
*/
#define Op0_shift 19
#define Op0_mask 0x3
#define Op1_shift 16
#define Op1_mask 0x7
#define CRn_shift 12
#define CRn_mask 0xf
#define CRm_shift 8
#define CRm_mask 0xf
#define Op2_shift 5
#define Op2_mask 0x7
#define sys_reg(op0, op1, crn, crm, op2) \
(((op0) << Op0_shift) | ((op1) << Op1_shift) | \
((crn) << CRn_shift) | ((crm) << CRm_shift) | \
((op2) << Op2_shift))
#define sys_reg_Op0(id) (((id) >> Op0_shift) & Op0_mask)
#define sys_reg_Op1(id) (((id) >> Op1_shift) & Op1_mask)
#define sys_reg_CRn(id) (((id) >> CRn_shift) & CRn_mask)
#define sys_reg_CRm(id) (((id) >> CRm_shift) & CRm_mask)
#define sys_reg_Op2(id) (((id) >> Op2_shift) & Op2_mask)
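/* Worked example: sys_reg(3, 0, 0, 0, 0) == (3 << 19) == 0x180000 (the
 * SYS_MIDR_EL1 encoding below), and sys_reg_Op0(0x180000) recovers 3. */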
#ifdef __ASSEMBLY__
#define __emit_inst(x) .inst (x)
#else
#define __emit_inst(x)".inst " __stringify((x)) "\n\t"
#endif
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
#define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0)
#define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1)
#define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2)
#define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4)
#define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5)
#define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6)
#define SYS_ID_MMFR3_EL1 sys_reg(3, 0, 0, 1, 7)
#define SYS_ID_ISAR0_EL1 sys_reg(3, 0, 0, 2, 0)
#define SYS_ID_ISAR1_EL1 sys_reg(3, 0, 0, 2, 1)
#define SYS_ID_ISAR2_EL1 sys_reg(3, 0, 0, 2, 2)
#define SYS_ID_ISAR3_EL1 sys_reg(3, 0, 0, 2, 3)
#define SYS_ID_ISAR4_EL1 sys_reg(3, 0, 0, 2, 4)
#define SYS_ID_ISAR5_EL1 sys_reg(3, 0, 0, 2, 5)
#define SYS_ID_MMFR4_EL1 sys_reg(3, 0, 0, 2, 6)
#define SYS_MVFR0_EL1 sys_reg(3, 0, 0, 3, 0)
#define SYS_MVFR1_EL1 sys_reg(3, 0, 0, 3, 1)
#define SYS_MVFR2_EL1 sys_reg(3, 0, 0, 3, 2)
#define SYS_ID_AA64PFR0_EL1 sys_reg(3, 0, 0, 4, 0)
#define SYS_ID_AA64PFR1_EL1 sys_reg(3, 0, 0, 4, 1)
#define SYS_ID_AA64ZFR0_EL1 sys_reg(3, 0, 0, 4, 4)
#define SYS_ID_AA64DFR0_EL1 sys_reg(3, 0, 0, 5, 0)
#define SYS_ID_AA64DFR1_EL1 sys_reg(3, 0, 0, 5, 1)
#define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0)
#define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1)
#define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0)
#define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1)
#define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2)
#define SYS_ZCR_EL1 sys_reg(3, 0, 1, 2, 0)
#define SYS_ZCR_EL2 sys_reg(3, 4, 1, 2, 0)
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1)
#define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7)
#define REG_PSTATE_PAN_IMM sys_reg(0, 0, 4, 0, 4)
#define REG_PSTATE_UAO_IMM sys_reg(0, 0, 4, 0, 3)
/*
#define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM | \
(!!x)<<8 | 0x1f)
#define SET_PSTATE_UAO(x) __inst_arm(0xd5000000 | REG_PSTATE_UAO_IMM | \
(!!x)<<8 | 0x1f)
*/
/* Common SCTLR_ELx flags. */
#define SCTLR_ELx_EE (1 << 25)
#define SCTLR_ELx_I (1 << 12)
#define SCTLR_ELx_SA (1 << 3)
#define SCTLR_ELx_C (1 << 2)
#define SCTLR_ELx_A (1 << 1)
#define SCTLR_ELx_M 1
#define SCTLR_ELx_FLAGS (SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
SCTLR_ELx_SA | SCTLR_ELx_I)
/* SCTLR_EL1 specific flags. */
#define SCTLR_EL1_UCI (1 << 26)
#define SCTLR_EL1_SPAN (1 << 23)
#define SCTLR_EL1_UCT (1 << 15)
#define SCTLR_EL1_SED (1 << 8)
#define SCTLR_EL1_CP15BEN (1 << 5)
/* id_aa64isar0 */
#define ID_AA64ISAR0_RDM_SHIFT 28
#define ID_AA64ISAR0_ATOMICS_SHIFT 20
#define ID_AA64ISAR0_CRC32_SHIFT 16
#define ID_AA64ISAR0_SHA2_SHIFT 12
#define ID_AA64ISAR0_SHA1_SHIFT 8
#define ID_AA64ISAR0_AES_SHIFT 4
/* id_aa64pfr0 */
#define ID_AA64PFR0_SVE_SHIFT 32
#define ID_AA64PFR0_GIC_SHIFT 24
#define ID_AA64PFR0_ASIMD_SHIFT 20
#define ID_AA64PFR0_FP_SHIFT 16
#define ID_AA64PFR0_EL3_SHIFT 12
#define ID_AA64PFR0_EL2_SHIFT 8
#define ID_AA64PFR0_EL1_SHIFT 4
#define ID_AA64PFR0_EL0_SHIFT 0
#define ID_AA64PFR0_SVE 0x1
#define ID_AA64PFR0_FP_NI 0xf
#define ID_AA64PFR0_FP_SUPPORTED 0x0
#define ID_AA64PFR0_ASIMD_NI 0xf
#define ID_AA64PFR0_ASIMD_SUPPORTED 0x0
#define ID_AA64PFR0_EL1_64BIT_ONLY 0x1
#define ID_AA64PFR0_EL0_64BIT_ONLY 0x1
#define ID_AA64PFR0_EL0_32BIT_64BIT 0x2
/* id_aa64mmfr0 */
#define ID_AA64MMFR0_TGRAN4_SHIFT 28
#define ID_AA64MMFR0_TGRAN64_SHIFT 24
#define ID_AA64MMFR0_TGRAN16_SHIFT 20
#define ID_AA64MMFR0_BIGENDEL0_SHIFT 16
#define ID_AA64MMFR0_SNSMEM_SHIFT 12
#define ID_AA64MMFR0_BIGENDEL_SHIFT 8
#define ID_AA64MMFR0_ASID_SHIFT 4
#define ID_AA64MMFR0_PARANGE_SHIFT 0
#define ID_AA64MMFR0_TGRAN4_NI 0xf
#define ID_AA64MMFR0_TGRAN4_SUPPORTED 0x0
#define ID_AA64MMFR0_TGRAN64_NI 0xf
#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
#define ID_AA64MMFR0_TGRAN16_NI 0x0
#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
/* id_aa64mmfr1 */
#define ID_AA64MMFR1_PAN_SHIFT 20
#define ID_AA64MMFR1_LOR_SHIFT 16
#define ID_AA64MMFR1_HPD_SHIFT 12
#define ID_AA64MMFR1_VHE_SHIFT 8
#define ID_AA64MMFR1_VMIDBITS_SHIFT 4
#define ID_AA64MMFR1_HADBS_SHIFT 0
#define ID_AA64MMFR1_VMIDBITS_8 0
#define ID_AA64MMFR1_VMIDBITS_16 2
/* id_aa64mmfr2 */
#define ID_AA64MMFR2_LVA_SHIFT 16
#define ID_AA64MMFR2_IESB_SHIFT 12
#define ID_AA64MMFR2_LSM_SHIFT 8
#define ID_AA64MMFR2_UAO_SHIFT 4
#define ID_AA64MMFR2_CNP_SHIFT 0
/* id_aa64dfr0 */
#define ID_AA64DFR0_PMSVER_SHIFT 32
#define ID_AA64DFR0_CTX_CMPS_SHIFT 28
#define ID_AA64DFR0_WRPS_SHIFT 20
#define ID_AA64DFR0_BRPS_SHIFT 12
#define ID_AA64DFR0_PMUVER_SHIFT 8
#define ID_AA64DFR0_TRACEVER_SHIFT 4
#define ID_AA64DFR0_DEBUGVER_SHIFT 0
#define ID_ISAR5_RDM_SHIFT 24
#define ID_ISAR5_CRC32_SHIFT 16
#define ID_ISAR5_SHA2_SHIFT 12
#define ID_ISAR5_SHA1_SHIFT 8
#define ID_ISAR5_AES_SHIFT 4
#define ID_ISAR5_SEVL_SHIFT 0
#define MVFR0_FPROUND_SHIFT 28
#define MVFR0_FPSHVEC_SHIFT 24
#define MVFR0_FPSQRT_SHIFT 20
#define MVFR0_FPDIVIDE_SHIFT 16
#define MVFR0_FPTRAP_SHIFT 12
#define MVFR0_FPDP_SHIFT 8
#define MVFR0_FPSP_SHIFT 4
#define MVFR0_SIMD_SHIFT 0
#define MVFR1_SIMDFMAC_SHIFT 28
#define MVFR1_FPHP_SHIFT 24
#define MVFR1_SIMDHP_SHIFT 20
#define MVFR1_SIMDSP_SHIFT 16
#define MVFR1_SIMDINT_SHIFT 12
#define MVFR1_SIMDLS_SHIFT 8
#define MVFR1_FPDNAN_SHIFT 4
#define MVFR1_FPFTZ_SHIFT 0
#define ID_AA64MMFR0_TGRAN4_SHIFT 28
#define ID_AA64MMFR0_TGRAN64_SHIFT 24
#define ID_AA64MMFR0_TGRAN16_SHIFT 20
#define ID_AA64MMFR0_TGRAN4_NI 0xf
#define ID_AA64MMFR0_TGRAN4_SUPPORTED 0x0
#define ID_AA64MMFR0_TGRAN64_NI 0xf
#define ID_AA64MMFR0_TGRAN64_SUPPORTED 0x0
#define ID_AA64MMFR0_TGRAN16_NI 0x0
#define ID_AA64MMFR0_TGRAN16_SUPPORTED 0x1
#if defined(CONFIG_ARM64_4K_PAGES)
#define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT
#define ID_AA64MMFR0_TGRAN_SUPPORTED ID_AA64MMFR0_TGRAN4_SUPPORTED
#elif defined(CONFIG_ARM64_16K_PAGES)
#define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN16_SHIFT
#define ID_AA64MMFR0_TGRAN_SUPPORTED ID_AA64MMFR0_TGRAN16_SUPPORTED
#elif defined(CONFIG_ARM64_64K_PAGES)
#define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN64_SHIFT
#define ID_AA64MMFR0_TGRAN_SUPPORTED ID_AA64MMFR0_TGRAN64_SUPPORTED
#endif
#define ZCR_EL1_LEN_SHIFT 0
#define ZCR_EL1_LEN_SIZE 9
#define ZCR_EL1_LEN_MASK 0x1ff
#define CPACR_EL1_ZEN_EL1EN (1 << 16)
#define CPACR_EL1_ZEN_EL0EN (1 << 17)
#define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN)
/* Safe value for MPIDR_EL1: Bit31:RES1, Bit30:U:0, Bit24:MT:0 */
#define SYS_MPIDR_SAFE_VAL (1UL << 31)
#ifdef __ASSEMBLY__
.irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
.equ .L__reg_num_x\num, \num
.endr
.equ .L__reg_num_xzr, 31
.macro mrs_s, rt, sreg
__emit_inst(0xd5200000|(\sreg)|(.L__reg_num_\rt))
.endm
.macro msr_s, sreg, rt
__emit_inst(0xd5000000|(\sreg)|(.L__reg_num_\rt))
.endm
#else
asm(
" .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n"
" .equ .L__reg_num_x\\num, \\num\n"
" .endr\n"
" .equ .L__reg_num_xzr, 31\n"
"\n"
" .macro mrs_s, rt, sreg\n"
__emit_inst(0xd5200000|(\\sreg)|(.L__reg_num_\\rt))
" .endm\n"
"\n"
" .macro msr_s, sreg, rt\n"
__emit_inst(0xd5000000|(\\sreg)|(.L__reg_num_\\rt))
" .endm\n"
);
#endif
/*
* Unlike read_cpuid, calls to read_sysreg are never expected to be
* optimized away or replaced with synthetic values.
*/
#define read_sysreg(r) ({ \
uint64_t __val; \
asm volatile("mrs %0, " __stringify(r) : "=r" (__val)); \
__val; \
})
/*
* The "Z" constraint normally means a zero immediate, but when combined with
* the "%x0" template means XZR.
*/
#define write_sysreg(v, r) do { \
uint64_t __val = (uint64_t)v; \
asm volatile("msr " __stringify(r) ", %x0" \
: : "rZ" (__val)); \
} while (0)
/*
* For registers without architectural names, or simply unsupported by
* GAS.
*/
#define read_sysreg_s(r) ({ \
uint64_t __val; \
asm volatile("mrs_s %0, " __stringify(r) : "=r" (__val)); \
__val; \
})
#define write_sysreg_s(v, r) do { \
uint64_t __val = (uint64_t)v; \
asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val)); \
} while (0)
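/*
 * Editor's usage sketch (illustrative): architectural register names go
 * through read_sysreg()/write_sysreg(), while sys_reg()-encoded names
 * need the _s variants. Assumes SVE is implemented when touching ZCR_EL1.
 */
static inline void zcr_el1_set_min_vl(void)
{
	uint64_t zcr = read_sysreg_s(SYS_ZCR_EL1);

	/* LEN = 0 selects the minimum (128-bit) vector length */
	write_sysreg_s(zcr & ~ZCR_EL1_LEN_MASK, SYS_ZCR_EL1);
}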
/* @ref.impl arch/arm64/include/asm/kvm_arm.h */
#define CPTR_EL2_TZ (1 << 8)
#endif /* __ASM_SYSREG_H */

@ -0,0 +1,100 @@
/* thread_info.h COPYRIGHT FUJITSU LIMITED 2015-2017 */
#ifndef __HEADER_ARM64_COMMON_THREAD_INFO_H
#define __HEADER_ARM64_COMMON_THREAD_INFO_H
#define KERNEL_STACK_SIZE 32768 /* 8 pages */
#define THREAD_START_SP (KERNEL_STACK_SIZE - 16)
#ifndef __ASSEMBLY__
#define ALIGN_UP(x, align) ALIGN_DOWN((x) + (align) - 1, align)
#define ALIGN_DOWN(x, align) ((x) & ~((align) - 1))
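/* e.g. ALIGN_DOWN(0x1234, 0x1000) == 0x1000 and ALIGN_UP(0x1234, 0x1000)
 * == 0x2000; align must be a power of two. */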
#include <process.h>
#include <prctl.h>
struct cpu_context {
unsigned long x19;
unsigned long x20;
unsigned long x21;
unsigned long x22;
unsigned long x23;
unsigned long x24;
unsigned long x25;
unsigned long x26;
unsigned long x27;
unsigned long x28;
unsigned long fp;
unsigned long sp;
unsigned long pc;
};
struct thread_info {
unsigned long flags; /* low level flags */
// mm_segment_t addr_limit; /* address limit */
// struct task_struct *task; /* main task structure */
// struct exec_domain *exec_domain; /* execution domain */
// struct restart_block restart_block;
// int preempt_count; /* 0 => preemptable, <0 => bug */
int cpu; /* cpu */
struct cpu_context cpu_context; /* kernel_context */
void *sve_state; /* SVE registers, if any */
uint16_t sve_vl; /* SVE vector length */
uint16_t sve_vl_onexec; /* SVE vl after next exec */
uint16_t sve_flags; /* SVE related flags */
unsigned long fault_address; /* fault info */
unsigned long fault_code; /* ESR_EL1 value */
};
/* Flags for sve_flags (intentionally defined to match the prctl flags) */
/* Inherit sve_vl and sve_flags across execve(): */
#define THREAD_VL_INHERIT PR_SVE_SET_VL_INHERIT
struct arm64_cpu_local_thread {
struct thread_info thread_info;
unsigned long paniced; /* 136 */
uint64_t panic_regs[34]; /* 144 */
};
union arm64_cpu_local_variables {
struct arm64_cpu_local_thread arm64_cpu_local_thread;
unsigned long stack[KERNEL_STACK_SIZE / sizeof(unsigned long)];
};
extern union arm64_cpu_local_variables init_thread_info;
/*
* how to get the current stack pointer from C
*/
register unsigned long current_stack_pointer asm ("sp");
/*
* how to get the thread information struct from C
*/
static inline struct thread_info *current_thread_info(void)
{
unsigned long ti = 0;
ti = ALIGN_DOWN(current_stack_pointer, KERNEL_STACK_SIZE);
return (struct thread_info *)ti;
}
/*
* how to get the pt_regs struct from C
*/
static inline struct pt_regs *current_pt_regs(void)
{
unsigned long regs = 0;
regs = ALIGN_DOWN(current_stack_pointer, KERNEL_STACK_SIZE);
regs += THREAD_START_SP - sizeof(struct pt_regs);
return (struct pt_regs *)regs;
}
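/*
 * Worked example (illustrative addresses): with KERNEL_STACK_SIZE = 32768
 * (0x8000), a stack pointer of 0xffff00000124f8e0 masks down to
 * 0xffff000001248000, which is both the stack base and the thread_info
 * address; pt_regs then sits at base + THREAD_START_SP - sizeof(struct pt_regs).
 */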
#endif /* !__ASSEMBLY__ */
#define TIF_SINGLESTEP 21
#endif /* !__HEADER_ARM64_COMMON_THREAD_INFO_H */

@ -0,0 +1,27 @@
/* traps.h COPYRIGHT FUJITSU LIMITED 2017 */
#ifndef __ASM_TRAP_H
#define __ASM_TRAP_H
#include <types.h>
struct pt_regs;
/* @ref.impl arch/arm64/include/asm/traps.h */
struct undef_hook {
struct list_head node;
uint32_t instr_mask;
uint32_t instr_val;
uint64_t pstate_mask;
uint64_t pstate_val;
int (*fn)(struct pt_regs *regs, uint32_t instr);
};
/* @ref.impl arch/arm64/include/asm/traps.h */
void register_undef_hook(struct undef_hook *hook);
/* @ref.impl arch/arm64/include/asm/traps.h */
void unregister_undef_hook(struct undef_hook *hook);
#endif /* __ASM_TRAP_H */

@ -0,0 +1,30 @@
/* vdso.h COPYRIGHT FUJITSU LIMITED 2016 */
#ifndef __HEADER_ARM64_COMMON_VDSO_H
#define __HEADER_ARM64_COMMON_VDSO_H
#ifdef __KERNEL__
/* @ref.impl arch/arm64/include/asm/vsdo.h::VDSO_LBASE */
/*
* Default link address for the vDSO.
* Since we randomise the VDSO mapping, there's little point in trying
* to prelink this.
*/
#define VDSO_LBASE 0x0
#ifndef __ASSEMBLY__
#include <vdso-offsets.h>
/* @ref.impl arch/arm64/include/asm/vsdo.h::VDSO_SYMBOL */
#define VDSO_SYMBOL(base, name) vdso_symbol_##name((unsigned long)(base))
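/* e.g. VDSO_SYMBOL(base, sigtramp) expands to
 * vdso_symbol_sigtramp((unsigned long)(base)). */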
void* vdso_symbol_sigtramp(unsigned long base);
int add_vdso_pages(struct thread *thread);
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* !__HEADER_ARM64_COMMON_VDSO_H */

@ -0,0 +1,8 @@
/* virt.h COPYRIGHT FUJITSU LIMITED 2015 */
#ifndef __HEADER_ARM64_COMMON_VIRT_H
#define __HEADER_ARM64_COMMON_VIRT_H
#define BOOT_CPU_MODE_EL1 (0xe11)
#define BOOT_CPU_MODE_EL2 (0xe12)
#endif /* !__HEADER_ARM64_COMMON_VIRT_H */

@ -0,0 +1,158 @@
/* irq-gic-v2.c COPYRIGHT FUJITSU LIMITED 2015-2016 */
#include <ihk/cpu.h>
#include <irq.h>
#include <arm-gic-v2.h>
#include <io.h>
#include <arch/cpu.h>
#include <memory.h>
#include <syscall.h>
// #define DEBUG_GICV2
#ifdef DEBUG_GICV2
#define dkprintf(...) kprintf(__VA_ARGS__)
#define ekprintf(...) kprintf(__VA_ARGS__)
#else
#define dkprintf(...)
#define ekprintf(...) kprintf(__VA_ARGS__)
#endif
void *dist_base;
void *cpu_base;
#define gic_hwid_to_affinity(hw_cpuid) (1UL << hw_cpuid)
/**
* arm64_raise_sgi_gicv2
* @ref.impl drivers/irqchip/irq-gic.c:gic_raise_softirq
*
* @note Interrupt control is performed by the caller at a higher
* level, so unlike gic_raise_softirq() this function does not need
* to disable/enable interrupts itself.
*/
static void arm64_raise_sgi_gicv2(unsigned int cpuid, unsigned int vector)
{
/* Build interrupt destination of the target cpu */
unsigned int hw_cpuid = ihk_mc_get_cpu_info()->hw_ids[cpuid];
uint8_t cpu_target_list = gic_hwid_to_affinity(hw_cpuid);
/*
* Ensure that stores to Normal memory are visible to the
* other CPUs before they observe us issuing the IPI.
*/
dmb(ishst);
/* write to GICD_SGIR */
writel_relaxed(
cpu_target_list << 16 | vector,
(void *)(dist_base + GIC_DIST_SOFTINT)
);
}
/**
* arm64_raise_spi_gicv2
* @ref.impl nothing.
*/
extern unsigned int ihk_ikc_irq_apicid;
static void arm64_raise_spi_gicv2(unsigned int cpuid, unsigned int vector)
{
uint64_t spi_reg_offset;
uint32_t spi_set_pending_bitpos;
if (cpuid != ihk_ikc_irq_apicid) {
ekprintf("SPI(irq#%d) cannot send other than the host.\n", vector);
return;
}
/**
* calculates register offset and bit position corresponding to the numbers.
*
* For interrupt vector m,
* - the corresponding GICD_ISPENDR number, n, is given by n = m / 32
* - the offset of the required GICD_ISPENDR is (0x200 + (4*n))
* - the bit number of the required Set-pending bit in this register is m % 32.
*/
spi_reg_offset = vector / 32 * 4;
spi_set_pending_bitpos = vector % 32;
/* write to GICD_ISPENDR */
writel_relaxed(
1 << spi_set_pending_bitpos,
(void *)(dist_base + GIC_DIST_PENDING_SET + spi_reg_offset)
);
}
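/* Worked example: vector 37 maps to GICD_ISPENDR1, i.e. offset
 * 0x200 + 4 = 0x204, with Set-pending bit 37 % 32 = 5. */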
/**
* arm64_issue_ipi_gicv2
* @param cpuid : hardware cpu id
* @param vector : interrupt vector number
*/
void arm64_issue_ipi_gicv2(unsigned int cpuid, unsigned int vector)
{
dkprintf("Send irq#%d to cpuid=%d\n", vector, cpuid);
if(vector < 16){
// send SGI
arm64_raise_sgi_gicv2(cpuid, vector);
} else if (32 <= vector && vector < 1020) {
// send SPI (allow only to host)
arm64_raise_spi_gicv2(cpuid, vector);
} else {
ekprintf("#%d is bad irq number.", vector);
}
}
/**
* handle_interrupt_gicv2
* @ref.impl drivers/irqchip/irq-gic.c:gic_handle_irq
*/
extern int interrupt_from_user(void *);
void handle_interrupt_gicv2(struct pt_regs *regs)
{
unsigned int irqstat, irqnr;
set_cputime(interrupt_from_user(regs)? 1: 2);
do {
// get GICC_IAR.InterruptID
irqstat = readl_relaxed(cpu_base + GIC_CPU_INTACK);
irqnr = irqstat & GICC_IAR_INT_ID_MASK;
if (irqnr < 32) {
writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);
handle_IPI(irqnr, regs);
continue;
} else if (irqnr != 1023) {
panic("PANIC: handle_interrupt_gicv2(): catch invalid interrupt.");
}
/*
* If no other interrupt is pending, GICC_IAR.InterruptID
* returns 1023 (see GICv2 spec, Chap. 4.4.4).
*/
break;
} while (1);
set_cputime(0);
}
void gic_dist_init_gicv2(unsigned long dist_base_pa, unsigned long size)
{
dist_base = map_fixed_area(dist_base_pa, size, 1 /* non-cacheable */);
}
void gic_cpu_init_gicv2(unsigned long cpu_base_pa, unsigned long size)
{
cpu_base = map_fixed_area(cpu_base_pa, size, 1 /* non-cacheable */);
}
void gic_enable_gicv2(void)
{
unsigned int enable_ppi_sgi = 0;
if (is_use_virt_timer()) {
enable_ppi_sgi |= GICD_ENABLE << get_virt_timer_intrid();
} else {
enable_ppi_sgi |= GICD_ENABLE << get_phys_timer_intrid();
}
writel_relaxed(enable_ppi_sgi, dist_base + GIC_DIST_ENABLE_SET);
}

@ -0,0 +1,416 @@
/* irq-gic-v3.c COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <irq.h>
#include <arm-gic-v2.h>
#include <arm-gic-v3.h>
#include <io.h>
#include <cputype.h>
#include <process.h>
#include <syscall.h>
//#define DEBUG_GICV3
#define USE_CAVIUM_THUNDER_X
#ifdef DEBUG_GICV3
#define dkprintf(...) kprintf(__VA_ARGS__)
#define ekprintf(...) kprintf(__VA_ARGS__)
#else
#define dkprintf(...)
#define ekprintf(...) kprintf(__VA_ARGS__)
#endif
#ifdef USE_CAVIUM_THUNDER_X
static char is_cavium_thunderx = 0;
#endif
void *dist_base;
void *rdist_base[NR_CPUS];
extern uint64_t ihk_param_cpu_logical_map;
static uint64_t *__cpu_logical_map = &ihk_param_cpu_logical_map;
extern uint64_t ihk_param_gic_rdist_base_pa[NR_CPUS];
#define cpu_logical_map(cpu) __cpu_logical_map[cpu]
/* Our default, arbitrary priority value. Linux only uses one anyway. */
#define DEFAULT_PMR_VALUE 0xf0
/**
* Low level accessors
* @ref.impl host-kernel/drivers/irqchip/irq-gic-v3.c
*/
static uint64_t gic_read_iar_common(void)
{
uint64_t irqstat;
#ifdef CONFIG_HAS_NMI
uint64_t daif;
uint64_t pmr;
uint64_t default_pmr_value = DEFAULT_PMR_VALUE;
/*
* The PMR may be configured to mask interrupts when this code is
* called, thus in order to acknowledge interrupts we must set the
* PMR to its default value before reading from the IAR.
*
* To do this without taking an interrupt we also ensure the I bit
* is set whilst we are interfering with the value of the PMR.
*/
asm volatile(
"mrs %1, daif\n\t" /* save I bit */
"msr daifset, #2\n\t" /* set I bit */
"mrs_s %2, " __stringify(ICC_PMR_EL1) "\n\t" /* save PMR */
"msr_s " __stringify(ICC_PMR_EL1) ",%3\n\t" /* set PMR */
"mrs_s %0, " __stringify(ICC_IAR1_EL1) "\n\t" /* ack int */
"msr_s " __stringify(ICC_PMR_EL1) ",%2\n\t" /* restore PMR */
"isb\n\t"
"msr daif, %1" /* restore I */
: "=r" (irqstat), "=&r" (daif), "=&r" (pmr)
: "r" (default_pmr_value));
#else /* CONFIG_HAS_NMI */
asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
#endif /* CONFIG_HAS_NMI */
return irqstat;
}
#ifdef USE_CAVIUM_THUNDER_X
/* Cavium ThunderX erratum 23154 */
static uint64_t gic_read_iar_cavium_thunderx(void)
{
uint64_t irqstat;
#ifdef CONFIG_HAS_NMI
uint64_t daif;
uint64_t pmr;
uint64_t default_pmr_value = DEFAULT_PMR_VALUE;
/*
* The PMR may be configured to mask interrupts when this code is
* called, thus in order to acknowledge interrupts we must set the
* PMR to its default value before reading from the IAR.
*
* To do this without taking an interrupt we also ensure the I bit
* is set whilst we are interfering with the value of the PMR.
*/
asm volatile(
"mrs %1, daif\n\t" /* save I bit */
"msr daifset, #2\n\t" /* set I bit */
"mrs_s %2, " __stringify(ICC_PMR_EL1) "\n\t" /* save PMR */
"msr_s " __stringify(ICC_PMR_EL1) ",%3\n\t" /* set PMR */
"nop;nop;nop;nop\n\t"
"nop;nop;nop;nop\n\t"
"mrs_s %0, " __stringify(ICC_IAR1_EL1) "\n\t" /* ack int */
"nop;nop;nop;nop\n\t"
"msr_s " __stringify(ICC_PMR_EL1) ",%2\n\t" /* restore PMR */
"isb\n\t"
"msr daif, %1" /* restore I */
: "=r" (irqstat), "=&r" (daif), "=&r" (pmr)
: "r" (default_pmr_value));
#else /* CONFIG_HAS_NMI */
asm volatile("nop;nop;nop;nop;");
asm volatile("nop;nop;nop;nop;");
asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
asm volatile("nop;nop;nop;nop;");
#endif /* CONFIG_HAS_NMI */
mb();
return irqstat;
}
#endif
static uint64_t gic_read_iar(void)
{
#ifdef USE_CAVIUM_THUNDER_X
if (is_cavium_thunderx)
return gic_read_iar_cavium_thunderx();
else
#endif
return gic_read_iar_common();
}
static void gic_write_pmr(uint64_t val)
{
asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" (val));
}
static void gic_write_ctlr(uint64_t val)
{
asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" (val));
isb();
}
static void gic_write_grpen1(uint64_t val)
{
asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" (val));
isb();
}
static inline void gic_write_eoir(uint64_t irq)
{
asm volatile("msr_s " __stringify(ICC_EOIR1_EL1) ", %0" : : "r" (irq));
isb();
}
static void gic_write_sgi1r(uint64_t val)
{
asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val));
}
static inline uint32_t gic_read_sre(void)
{
uint64_t val;
asm volatile("mrs_s %0, " __stringify(ICC_SRE_EL1) : "=r" (val));
return val;
}
static inline void gic_write_sre(uint32_t val)
{
asm volatile("msr_s " __stringify(ICC_SRE_EL1) ", %0" : : "r" ((uint64_t)val));
isb();
}
static uint32_t gic_enable_sre(void)
{
uint32_t val;
val = gic_read_sre();
if (val & ICC_SRE_EL1_SRE)
return 1; /*ok*/
val |= ICC_SRE_EL1_SRE;
gic_write_sre(val);
val = gic_read_sre();
return !!(val & ICC_SRE_EL1_SRE);
}
#ifdef CONFIG_HAS_NMI
static inline void gic_write_bpr1(uint32_t val)
{
asm volatile("msr_s " __stringify(ICC_BPR1_EL1) ", %0" : : "r" (val));
}
#endif
static void arm64_raise_sgi_gicv3(uint32_t cpuid, uint32_t vector)
{
uint64_t mpidr, cluster_id;
uint16_t tlist;
uint64_t val;
/* Build interrupt destination of the target cpu */
uint32_t hw_cpuid = ihk_mc_get_cpu_info()->hw_ids[cpuid];
/*
* Ensure that stores to Normal memory are visible to the
* other CPUs before issuing the IPI.
*/
smp_wmb();
mpidr = cpu_logical_map(hw_cpuid);
if((mpidr & 0xffUL) < 16) {
cluster_id = cpu_logical_map(hw_cpuid) & ~0xffUL;
tlist = (uint16_t)(1 << (mpidr & 0xf));
#define MPIDR_TO_SGI_AFFINITY(cluster_id, level) \
(MPIDR_AFFINITY_LEVEL(cluster_id, level) \
<< ICC_SGI1R_AFFINITY_## level ##_SHIFT)
val = (MPIDR_TO_SGI_AFFINITY(cluster_id, 3) |
MPIDR_TO_SGI_AFFINITY(cluster_id, 2) |
vector << ICC_SGI1R_SGI_ID_SHIFT |
MPIDR_TO_SGI_AFFINITY(cluster_id, 1) |
tlist << ICC_SGI1R_TARGET_LIST_SHIFT);
dkprintf("CPU%d: ICC_SGI1R_EL1 %llx\n", ihk_mc_get_processor_id(), val);
gic_write_sgi1r(val);
/* Force the above writes to ICC_SGI1R_EL1 to be executed */
isb();
} else {
/*
* If we ever get a cluster of more than 16 CPUs, just
* scream and skip that CPU.
*/
ekprintf("GICv3 can't send SGI for TargetList=%d\n", (mpidr & 0xffUL));
}
}
static void arm64_raise_spi_gicv3(uint32_t cpuid, uint32_t vector)
{
uint64_t spi_reg_offset;
uint32_t spi_set_pending_bitpos;
/**
* calculates register offset and bit position corresponding to the numbers.
*
* For interrupt vector m,
* - the corresponding GICD_ISPENDR number, n, is given by n = m / 32
* - the offset of the required GICD_ISPENDR is (0x200 + (4*n))
* - the bit number of the required Set-pending bit in this register is m % 32.
*/
spi_reg_offset = vector / 32 * 4;
spi_set_pending_bitpos = vector % 32;
/* write to GICD_ISPENDR */
writel_relaxed(
1 << spi_set_pending_bitpos,
(void *)(dist_base + GICD_ISPENDR + spi_reg_offset)
);
}
static void arm64_raise_lpi_gicv3(uint32_t cpuid, uint32_t vector)
{
// @todo.impl
}
void arm64_issue_ipi_gicv3(uint32_t cpuid, uint32_t vector)
{
dkprintf("Send irq#%d to cpuid=%d\n", vector, cpuid);
barrier();
if(vector < 16){
// send SGI
arm64_raise_sgi_gicv3(cpuid, vector);
} else if (32 <= vector && vector < 1020) {
// send SPI (allow only to host)
arm64_raise_spi_gicv3(cpuid, vector);
} else if (8192 <= vector) {
// send LPI (allow only to host)
arm64_raise_lpi_gicv3(cpuid, vector);
} else {
ekprintf("#%d is bad irq number.", vector);
}
}
extern int interrupt_from_user(void *);
void handle_interrupt_gicv3(struct pt_regs *regs)
{
uint64_t irqnr;
irqnr = gic_read_iar();
cpu_enable_nmi();
set_cputime(interrupt_from_user(regs)? 1: 2);
while (irqnr != ICC_IAR1_EL1_SPURIOUS) {
if ((irqnr < 1020) || (irqnr >= 8192)) {
gic_write_eoir(irqnr);
handle_IPI(irqnr, regs);
}
irqnr = gic_read_iar();
}
set_cputime(0);
}
void gic_dist_init_gicv3(unsigned long dist_base_pa, unsigned long size)
{
dist_base = map_fixed_area(dist_base_pa, size, 1 /* non-cacheable */);
#ifdef USE_CAVIUM_THUNDER_X
/* Cavium ThunderX erratum 23154 */
if (MIDR_IMPLEMENTOR(read_cpuid_id()) == ARM_CPU_IMP_CAVIUM) {
is_cavium_thunderx = 1;
}
#endif
}
void gic_cpu_init_gicv3(unsigned long cpu_base_pa, unsigned long size)
{
int32_t cpuid, hw_cpuid;
struct ihk_mc_cpu_info *cpu_info = ihk_mc_get_cpu_info();
for(cpuid = 0; cpuid < cpu_info->ncpus; cpuid++) {
hw_cpuid = cpu_info->hw_ids[cpuid];
if(ihk_param_gic_rdist_base_pa[hw_cpuid] != 0) {
rdist_base[hw_cpuid] =
map_fixed_area(ihk_param_gic_rdist_base_pa[hw_cpuid], size, 1 /* non-cacheable */);
}
}
}
static void gic_do_wait_for_rwp(void *base)
{
uint32_t count = 1000000; /* 1s! */
while (readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP) {
count--;
if (!count) {
ekprintf("RWP timeout, gone fishing\n");
return;
}
cpu_pause();
};
}
void gic_enable_gicv3(void)
{
void *rbase = rdist_base[ihk_mc_get_hardware_processor_id()];
void *rd_sgi_base = rbase + 0x10000 /* SZ_64K */;
int i;
unsigned int enable_ppi_sgi = GICD_INT_EN_SET_SGI;
if (is_use_virt_timer()) {
enable_ppi_sgi |= GICD_ENABLE << get_virt_timer_intrid();
} else {
enable_ppi_sgi |= GICD_ENABLE << get_phys_timer_intrid();
}
/*
* Deal with the banked PPI and SGI interrupts - disable all
* PPI interrupts, ensure all SGI interrupts are enabled.
*/
writel_relaxed(~enable_ppi_sgi, rd_sgi_base + GIC_DIST_ENABLE_CLEAR);
writel_relaxed(enable_ppi_sgi, rd_sgi_base + GIC_DIST_ENABLE_SET);
/*
* Set priority on PPI and SGI interrupts
*/
for (i = 0; i < 32; i += 4)
writel_relaxed(GICD_INT_DEF_PRI_X4,
rd_sgi_base + GIC_DIST_PRI + i * 4 / 4);
/* sync wait */
gic_do_wait_for_rwp(rbase);
/*
* Need to check that the SRE bit has actually been set. If
* not, it means that SRE is disabled at EL2. We're going to
* die painfully, and there is nothing we can do about it.
*
* Kindly inform the luser.
*/
if (!gic_enable_sre())
panic("GIC: unable to set SRE (disabled at EL2), panic ahead\n");
#ifndef CONFIG_HAS_NMI
/* Set priority mask register */
gic_write_pmr(DEFAULT_PMR_VALUE);
#endif
/* EOI deactivates interrupt too (mode 0) */
gic_write_ctlr(ICC_CTLR_EL1_EOImode_drop_dir);
/* ... and let's hit the road... */
gic_write_grpen1(1);
#ifdef CONFIG_HAS_NMI
/*
* Some firmwares hand over to the kernel with the BPR changed from
* its reset value (and with a value large enough to prevent
* any pre-emptive interrupts from working at all). Writing a zero
* to BPR restores its reset value.
*/
gic_write_bpr1(0);
/* Set specific IPI to NMI */
writeb_relaxed(GICD_INT_NMI_PRI, rd_sgi_base + GIC_DIST_PRI + INTRID_CPU_STOP);
writeb_relaxed(GICD_INT_NMI_PRI, rd_sgi_base + GIC_DIST_PRI + INTRID_MEMDUMP);
writeb_relaxed(GICD_INT_NMI_PRI, rd_sgi_base + GIC_DIST_PRI + INTRID_STACK_TRACE);
/* sync wait */
gic_do_wait_for_rwp(rbase);
#endif /* CONFIG_HAS_NMI */
}

arch/arm64/kernel/local.c Normal file
@ -0,0 +1,88 @@
/* local.c COPYRIGHT FUJITSU LIMITED 2015-2016 */
#include <cpulocal.h>
#include <ihk/atomic.h>
#include <ihk/mm.h>
#include <ihk/cpu.h>
#include <ihk/debug.h>
#include <registers.h>
#include <string.h>
#define LOCALS_SPAN (8 * PAGE_SIZE)
/* BSP initialized stack area */
union arm64_cpu_local_variables init_thread_info __attribute__((aligned(KERNEL_STACK_SIZE)));
/* BSP/AP idle stack pointer head */
static union arm64_cpu_local_variables *locals;
size_t arm64_cpu_local_variables_span = LOCALS_SPAN; /* for debugger */
/* allocate & initialize BSP/AP idle stack */
void init_processors_local(int max_id)
{
int i = 0;
const int sz = (max_id + 1) * KERNEL_STACK_SIZE;
union arm64_cpu_local_variables *tmp;
/* allocate one more for alignment */
locals = ihk_mc_alloc_pages(((sz + PAGE_SIZE - 1) / PAGE_SIZE), IHK_MC_AP_CRITICAL);
locals = (union arm64_cpu_local_variables *)ALIGN_UP((unsigned long)locals, KERNEL_STACK_SIZE);
/* clear struct process, struct process_vm, struct thread_info area */
for (i = 0, tmp = locals; i < max_id; i++, tmp++) {
memset(tmp, 0, sizeof(struct thread_info));
}
kprintf("locals = %p\n", locals);
}
/* get id (logical processor id) local variable address */
union arm64_cpu_local_variables *get_arm64_cpu_local_variable(int id)
{
return locals + id;
}
/* get id (logical processor id) kernel stack address */
static void *get_arm64_cpu_local_kstack(int id)
{
return (char *)get_arm64_cpu_local_variable(id) + THREAD_START_SP;
}
/* get current cpu local variable address */
union arm64_cpu_local_variables *get_arm64_this_cpu_local(void)
{
int id = ihk_mc_get_processor_id();
return get_arm64_cpu_local_variable(id);
}
/* get current kernel stack address */
void *get_arm64_this_cpu_kstack(void)
{
int id = ihk_mc_get_processor_id();
return get_arm64_cpu_local_kstack(id);
}
/* assign logical processor id for current_thread_info.cpu */
/* logical processor id BSP:0, AP0:1, AP1:2, ... APn:n-1 */
static ihk_atomic_t last_processor_id = IHK_ATOMIC_INIT(-1);
void assign_processor_id(void)
{
int id;
union arm64_cpu_local_variables *v;
id = ihk_atomic_inc_return(&last_processor_id);
v = get_arm64_cpu_local_variable(id);
v->arm64_cpu_local_thread.thread_info.cpu = id;
}
/** IHK **/
/* get current logical processor id */
int ihk_mc_get_processor_id(void)
{
return current_thread_info()->cpu;
}
/* get current physical processor id (not equal AFFINITY !!) */
int ihk_mc_get_hardware_processor_id(void)
{
return ihk_mc_get_cpu_info()->hw_ids[ihk_mc_get_processor_id()];
}

@ -0,0 +1,78 @@
/* memcpy.S COPYRIGHT FUJITSU LIMITED 2017 */
/*
* Copyright (C) 2013 ARM Ltd.
* Copyright (C) 2013 Linaro.
*
* This code is based on glibc cortex strings work originally authored by Linaro
* and re-licensed under GPLv2 for the Linux kernel. The original code can
* be found @
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linkage.h>
#include <assembler.h>
#include <cache.h>
/*
* Copy a buffer from src to dest (alignment handled by the hardware)
*
* Parameters:
* x0 - dest
* x1 - src
* x2 - n
* Returns:
* x0 - dest
*/
.macro ldrb1 ptr, regB, val
ldrb \ptr, [\regB], \val
.endm
.macro strb1 ptr, regB, val
strb \ptr, [\regB], \val
.endm
.macro ldrh1 ptr, regB, val
ldrh \ptr, [\regB], \val
.endm
.macro strh1 ptr, regB, val
strh \ptr, [\regB], \val
.endm
.macro ldr1 ptr, regB, val
ldr \ptr, [\regB], \val
.endm
.macro str1 ptr, regB, val
str \ptr, [\regB], \val
.endm
.macro ldp1 ptr, regB, regC, val
ldp \ptr, \regB, [\regC], \val
.endm
.macro stp1 ptr, regB, regC, val
stp \ptr, \regB, [\regC], \val
.endm
.weak memcpy
ENTRY(____inline_memcpy)
ENTRY(__inline_memcpy)
#include "copy_template.S"
ret
ENDPIPROC(__inline_memcpy)
ENDPROC(____inline_memcpy)

arch/arm64/kernel/memory.c Normal file

File diff suppressed because it is too large

arch/arm64/kernel/memset.S Normal file
@ -0,0 +1,220 @@
/* memset.S COPYRIGHT FUJITSU LIMITED 2017 */
/*
* Copyright (C) 2013 ARM Ltd.
* Copyright (C) 2013 Linaro.
*
* This code is based on glibc cortex strings work originally authored by Linaro
* and re-licensed under GPLv2 for the Linux kernel. The original code can
* be found @
*
* http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/
* files/head:/src/aarch64/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linkage.h>
#include <assembler.h>
#include <cache.h>
/*
* Fill in the buffer with character c (alignment handled by the hardware)
*
* Parameters:
* x0 - buf
* x1 - c
* x2 - n
* Returns:
* x0 - buf
*/
dstin .req x0
val .req w1
count .req x2
tmp1 .req x3
tmp1w .req w3
tmp2 .req x4
tmp2w .req w4
zva_len_x .req x5
zva_len .req w5
zva_bits_x .req x6
A_l .req x7
A_lw .req w7
dst .req x8
tmp3w .req w9
tmp3 .req x9
.weak memset
ENTRY(____inline_memset)
ENTRY(__inline_memset)
mov dst, dstin /* Preserve return value. */
and A_lw, val, #255
orr A_lw, A_lw, A_lw, lsl #8
orr A_lw, A_lw, A_lw, lsl #16
orr A_l, A_l, A_l, lsl #32
cmp count, #15
b.hi .Lover16_proc
/* All stores may be unaligned. */
tbz count, #3, 1f
str A_l, [dst], #8
1:
tbz count, #2, 2f
str A_lw, [dst], #4
2:
tbz count, #1, 3f
strh A_lw, [dst], #2
3:
tbz count, #0, 4f
strb A_lw, [dst]
4:
ret
.Lover16_proc:
/*Whether the start address is aligned with 16.*/
neg tmp2, dst
ands tmp2, tmp2, #15
b.eq .Laligned
/*
* The count is at least 16, so we can use stp to store the first 16 bytes,
* then adjust dst to be 16-byte aligned. This leaves the current
* memory address on an alignment boundary.
*/
stp A_l, A_l, [dst] /*non-aligned store..*/
/*make the dst aligned..*/
sub count, count, tmp2
add dst, dst, tmp2
.Laligned:
cbz A_l, .Lzero_mem
.Ltail_maybe_long:
cmp count, #64
b.ge .Lnot_short
.Ltail63:
ands tmp1, count, #0x30
b.eq 3f
cmp tmp1w, #0x20
b.eq 1f
b.lt 2f
stp A_l, A_l, [dst], #16
1:
stp A_l, A_l, [dst], #16
2:
stp A_l, A_l, [dst], #16
/*
* The remaining store length is less than 16; use stp to write the last
* 16 bytes. Some bytes are written twice and the access is unaligned.
*/
3:
ands count, count, #15
cbz count, 4f
add dst, dst, count
stp A_l, A_l, [dst, #-16] /* Repeat some/all of last store. */
4:
ret
/*
* Critical loop. Start at a new cache line boundary. Assuming
* 64 bytes per line, this ensures the entire loop is in one line.
*/
.p2align L1_CACHE_SHIFT
.Lnot_short:
sub dst, dst, #16/* Pre-bias. */
sub count, count, #64
1:
stp A_l, A_l, [dst, #16]
stp A_l, A_l, [dst, #32]
stp A_l, A_l, [dst, #48]
stp A_l, A_l, [dst, #64]!
subs count, count, #64
b.ge 1b
tst count, #0x3f
add dst, dst, #16
b.ne .Ltail63
.Lexitfunc:
ret
/*
* For zeroing memory, check to see if we can use the ZVA feature to
* zero entire 'cache' lines.
*/
.Lzero_mem:
cmp count, #63
b.le .Ltail63
/*
* For zeroing small amounts of memory, it's not worth setting up
* the line-clear code.
*/
cmp count, #128
b.lt .Lnot_short /*count is at least 128 bytes*/
mrs tmp1, dczid_el0
tbnz tmp1, #4, .Lnot_short
mov tmp3w, #4
and zva_len, tmp1w, #15 /* Safety: other bits reserved. */
lsl zva_len, tmp3w, zva_len
ands tmp3w, zva_len, #63
/*
* ensure the zva_len is not less than 64.
* It is not meaningful to use ZVA if the block size is less than 64.
*/
b.ne .Lnot_short
.Lzero_by_line:
/*
* Compute how far we need to go to become suitably aligned. We're
* already at quad-word alignment.
*/
cmp count, zva_len_x
b.lt .Lnot_short /* Not enough to reach alignment. */
sub zva_bits_x, zva_len_x, #1
neg tmp2, dst
ands tmp2, tmp2, zva_bits_x
b.eq 2f /* Already aligned. */
/* Not aligned, check that there's enough to copy after alignment.*/
sub tmp1, count, tmp2
/*
* Guarantee that the remaining length to be zeroed via ZVA is larger
* than 64, to keep the processing at 2f from running past the range.
*/
cmp tmp1, #64
ccmp tmp1, zva_len_x, #8, ge /* NZCV=0b1000 */
b.lt .Lnot_short
/*
* We know that there's at least 64 bytes to zero and that it's safe
* to overrun by 64 bytes.
*/
mov count, tmp1
1:
stp A_l, A_l, [dst]
stp A_l, A_l, [dst, #16]
stp A_l, A_l, [dst, #32]
subs tmp2, tmp2, #64
stp A_l, A_l, [dst, #48]
add dst, dst, #64
b.ge 1b
/* We've overrun a bit, so adjust dst downwards.*/
add dst, dst, tmp2
2:
sub count, count, zva_len_x
3:
dc zva, dst
add dst, dst, zva_len_x
subs count, count, zva_len_x
b.ge 3b
ands count, count, zva_bits_x
b.ne .Ltail_maybe_long
ret
ENDPIPROC(__inline_memset)
ENDPROC(____inline_memset)

arch/arm64/kernel/mikc.c Normal file
@ -0,0 +1,44 @@
/* mikc.c COPYRIGHT FUJITSU LIMITED 2015-2016 */
#include <ihk/ikc.h>
#include <ihk/lock.h>
#include <ikc/msg.h>
#include <memory.h>
#include <string.h>
extern int num_processors;
extern void arch_set_mikc_queue(void *r, void *w);
ihk_ikc_ph_t arch_master_channel_packet_handler;
int ihk_mc_ikc_init_first_local(struct ihk_ikc_channel_desc *channel,
ihk_ikc_ph_t packet_handler)
{
struct ihk_ikc_queue_head *rq, *wq;
size_t mikc_queue_pages;
ihk_ikc_system_init(NULL);
memset(channel, 0, sizeof(struct ihk_ikc_channel_desc));
mikc_queue_pages = ((2 * num_processors * MASTER_IKCQ_PKTSIZE)
+ (PAGE_SIZE - 1)) / PAGE_SIZE;
/* Place both sides in this side */
rq = ihk_mc_alloc_pages(mikc_queue_pages, IHK_MC_AP_CRITICAL);
wq = ihk_mc_alloc_pages(mikc_queue_pages, IHK_MC_AP_CRITICAL);
ihk_ikc_init_queue(rq, 0, 0,
mikc_queue_pages * PAGE_SIZE, MASTER_IKCQ_PKTSIZE);
ihk_ikc_init_queue(wq, 0, 0,
mikc_queue_pages * PAGE_SIZE, MASTER_IKCQ_PKTSIZE);
arch_master_channel_packet_handler = packet_handler;
ihk_ikc_init_desc(channel, IKC_OS_HOST, 0, rq, wq,
ihk_ikc_master_channel_packet_handler, channel);
ihk_ikc_enable_channel(channel);
/* Set boot parameter */
arch_set_mikc_queue(rq, wq);
return 0;
}

arch/arm64/kernel/perfctr.c Normal file
@ -0,0 +1,156 @@
/* perfctr.c COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <arch-perfctr.h>
#include <ihk/perfctr.h>
#include <mc_perf_event.h>
#include <errno.h>
#include <ihk/debug.h>
#include <registers.h>
#include <string.h>
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* Set at runtime when we know what CPU type we are.
*/
struct arm_pmu cpu_pmu;
extern int ihk_param_pmu_irq_affiniry[CONFIG_SMP_MAX_CORES];
extern int ihk_param_nr_pmu_irq_affiniry;
int arm64_init_perfctr(void)
{
int ret;
int i;
memset(&cpu_pmu, 0, sizeof(cpu_pmu));
ret = armv8pmu_init(&cpu_pmu);
if (!ret) {
return ret;
}
for (i = 0; i < ihk_param_nr_pmu_irq_affiniry; i++) {
ret = ihk_mc_register_interrupt_handler(ihk_param_pmu_irq_affiniry[i], cpu_pmu.handler);
}
return ret;
}
int arm64_enable_pmu(void)
{
int ret;
if (cpu_pmu.reset) {
cpu_pmu.reset(&cpu_pmu);
}
ret = cpu_pmu.enable_pmu();
return ret;
}
void arm64_disable_pmu(void)
{
cpu_pmu.disable_pmu();
}
extern unsigned int *arm64_march_perfmap;
static int __ihk_mc_perfctr_init(int counter, uint32_t type, uint64_t config, int mode)
{
int ret;
unsigned long config_base = 0;
int mapping;
mapping = cpu_pmu.map_event(type, config);
if (mapping < 0) {
return mapping;
}
ret = cpu_pmu.disable_counter(counter);
if (!ret) {
return ret;
}
ret = cpu_pmu.enable_intens(counter);
if (!ret) {
return ret;
}
ret = cpu_pmu.set_event_filter(&config_base, mode);
if (!ret) {
return ret;
}
config_base |= (unsigned long)mapping;
cpu_pmu.write_evtype(counter, config_base);
return ret;
}
int ihk_mc_perfctr_init_raw(int counter, uint64_t config, int mode)
{
int ret;
ret = __ihk_mc_perfctr_init(counter, PERF_TYPE_RAW, config, mode);
return ret;
}
int ihk_mc_perfctr_init(int counter, uint64_t config, int mode)
{
int ret;
ret = __ihk_mc_perfctr_init(counter, PERF_TYPE_RAW, config, mode);
return ret;
}
int ihk_mc_perfctr_start(int counter)
{
int ret;
ret = cpu_pmu.enable_counter(counter);
return ret;
}
int ihk_mc_perfctr_stop(int counter)
{
cpu_pmu.disable_counter(counter);
// When ihk_mc_perfctr_start is called, the init-family functions
// are invoked (re-enabling this), so disable the interrupt here.
cpu_pmu.disable_intens(counter);
return 0;
}
int ihk_mc_perfctr_reset(int counter)
{
// TODO[PMU]: As with ihk_mc_perfctr_set, implement properly once the common-code handling of the sampling rate is settled.
cpu_pmu.write_counter(counter, 0);
return 0;
}
//int ihk_mc_perfctr_set(int counter, unsigned long val)
int ihk_mc_perfctr_set(int counter, long val) /* 0416_patchtemp */
{
// TODO[PMU]: The common code is expected to compute the sampling rate and pass the counter value to program in val. Implement properly once the sampling-rate handling is settled.
uint32_t v = val;
cpu_pmu.write_counter(counter, v);
return 0;
}
int ihk_mc_perfctr_read_mask(unsigned long counter_mask, unsigned long *value)
{
/* This function is not used yet. */
panic("not implemented.");
return 0;
}
unsigned long ihk_mc_perfctr_read(int counter)
{
unsigned long count;
count = cpu_pmu.read_counter(counter);
return count;
}
//int ihk_mc_perfctr_alloc_counter(unsigned long pmc_status)
int ihk_mc_perfctr_alloc_counter(unsigned int *type, unsigned long *config, unsigned long pmc_status) /* 0416_patchtemp */
{
int ret;
ret = cpu_pmu.get_event_idx(cpu_pmu.num_events, pmc_status);
return ret;
}
/* 0416_patchtemp */
/* ihk_mc_perfctr_fixed_init() stub added. */
int ihk_mc_perfctr_fixed_init(int counter, int mode)
{
return -1;
}

@ -0,0 +1,653 @@
/* perfctr_armv8pmu.c COPYRIGHT FUJITSU LIMITED 2016-2017 */
#include <arch-perfctr.h>
#include <mc_perf_event.h>
#include <ihk/perfctr.h>
#include <errno.h>
#include <ihk/debug.h>
#define BIT(nr) (1UL << (nr))
//#define DEBUG_PRINT_PMU
#ifdef DEBUG_PRINT_PMU
#define dkprintf(...) kprintf(__VA_ARGS__)
#define ekprintf(...) kprintf(__VA_ARGS__)
#else
#define dkprintf(...) do { if (0) kprintf(__VA_ARGS__); } while (0)
#define ekprintf(...) kprintf(__VA_ARGS__)
#endif
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* Perf Events' indices
*/
#define ARMV8_IDX_CYCLE_COUNTER 0
#define ARMV8_IDX_COUNTER0 1
#define ARMV8_IDX_COUNTER_LAST (ARMV8_IDX_CYCLE_COUNTER + get_cpu_pmu()->num_events - 1)
#define ARMV8_MAX_COUNTERS 32
#define ARMV8_COUNTER_MASK (ARMV8_MAX_COUNTERS - 1)
/*
* ARMv8 low level PMU access
*/
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* Perf Event to low level counters mapping
*/
#define ARMV8_IDX_TO_COUNTER(x) \
(((x) - ARMV8_IDX_COUNTER0) & ARMV8_COUNTER_MASK)
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* Per-CPU PMCR: config reg
*/
#define ARMV8_PMCR_E (1 << 0) /* Enable all counters */
#define ARMV8_PMCR_P (1 << 1) /* Reset all counters */
#define ARMV8_PMCR_C (1 << 2) /* Cycle counter reset */
#define ARMV8_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
#define ARMV8_PMCR_X (1 << 4) /* Export to ETM */
#define ARMV8_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
#define ARMV8_PMCR_N_SHIFT 11 /* Number of counters supported */
#define ARMV8_PMCR_N_MASK 0x1f
#define ARMV8_PMCR_MASK 0x3f /* Mask for writable bits */
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* PMOVSR: counters overflow flag status reg
*/
#define ARMV8_OVSR_MASK 0xffffffff /* Mask for writable bits */
#define ARMV8_OVERFLOWED_MASK ARMV8_OVSR_MASK
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* PMXEVTYPER: Event selection reg
*/
#define ARMV8_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */
#define ARMV8_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* Event filters for PMUv3
*/
#define ARMV8_EXCLUDE_EL1 (1 << 31)
#define ARMV8_EXCLUDE_EL0 (1 << 30)
#define ARMV8_INCLUDE_EL2 (1 << 27)
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* ARMv8 PMUv3 Performance Events handling code.
* Common event types.
*/
enum armv8_pmuv3_perf_types {
/* Required events. */
ARMV8_PMUV3_PERFCTR_PMNC_SW_INCR = 0x00,
ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL = 0x03,
ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS = 0x04,
ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED = 0x10,
ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES = 0x11,
ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED = 0x12,
/* At least one of the following is required. */
ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED = 0x08,
ARMV8_PMUV3_PERFCTR_OP_SPEC = 0x1B,
/* Common architectural events. */
ARMV8_PMUV3_PERFCTR_MEM_READ = 0x06,
ARMV8_PMUV3_PERFCTR_MEM_WRITE = 0x07,
ARMV8_PMUV3_PERFCTR_EXC_TAKEN = 0x09,
ARMV8_PMUV3_PERFCTR_EXC_EXECUTED = 0x0A,
ARMV8_PMUV3_PERFCTR_CID_WRITE = 0x0B,
ARMV8_PMUV3_PERFCTR_PC_WRITE = 0x0C,
ARMV8_PMUV3_PERFCTR_PC_IMM_BRANCH = 0x0D,
ARMV8_PMUV3_PERFCTR_PC_PROC_RETURN = 0x0E,
ARMV8_PMUV3_PERFCTR_MEM_UNALIGNED_ACCESS = 0x0F,
ARMV8_PMUV3_PERFCTR_TTBR_WRITE = 0x1C,
/* Common microarchitectural events. */
ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL = 0x01,
ARMV8_PMUV3_PERFCTR_ITLB_REFILL = 0x02,
ARMV8_PMUV3_PERFCTR_DTLB_REFILL = 0x05,
ARMV8_PMUV3_PERFCTR_MEM_ACCESS = 0x13,
ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS = 0x14,
ARMV8_PMUV3_PERFCTR_L1_DCACHE_WB = 0x15,
ARMV8_PMUV3_PERFCTR_L2_CACHE_ACCESS = 0x16,
ARMV8_PMUV3_PERFCTR_L2_CACHE_REFILL = 0x17,
ARMV8_PMUV3_PERFCTR_L2_CACHE_WB = 0x18,
ARMV8_PMUV3_PERFCTR_BUS_ACCESS = 0x19,
ARMV8_PMUV3_PERFCTR_MEM_ERROR = 0x1A,
ARMV8_PMUV3_PERFCTR_BUS_CYCLES = 0x1D,
};
/* @ref.impl arch/arm64/kernel/perf_event.c */
#define HW_OP_UNSUPPORTED 0xFFFF
/* @ref.impl arch/arm64/kernel/perf_event.c */
#define C(_x) \
PERF_COUNT_HW_CACHE_##_x
/* @ref.impl arch/arm64/kernel/perf_event.c */
#define CACHE_OP_UNSUPPORTED 0xFFFF
/*
* @ref.impl arch/arm64/kernel/perf_event.c
* PMUv3 HW events mapping.
*/
static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
[PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
[PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
[PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = HW_OP_UNSUPPORTED,
[PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
[PERF_COUNT_HW_BUS_CYCLES] = HW_OP_UNSUPPORTED,
[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = HW_OP_UNSUPPORTED,
[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = HW_OP_UNSUPPORTED,
[PERF_COUNT_HW_REF_CPU_CYCLES] = HW_OP_UNSUPPORTED, /* TODO[PMU]: PERF_COUNT_HW_REF_CPU_CYCLES did not exist on CentOS; to be confirmed. */
};
/* @ref.impl arch/arm64/kernel/perf_event.c */
static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
[C(L1D)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
[C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
[C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(L1I)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(LL)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(DTLB)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(ITLB)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(BPU)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
[C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
[C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
[C(NODE)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
},
},
};
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int
armpmu_map_cache_event(const unsigned (*cache_map)
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX],
uint64_t config)
{
unsigned int cache_type, cache_op, cache_result, ret;
cache_type = (config >> 0) & 0xff;
if (cache_type >= PERF_COUNT_HW_CACHE_MAX)
return -EINVAL;
cache_op = (config >> 8) & 0xff;
if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX)
return -EINVAL;
cache_result = (config >> 16) & 0xff;
if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
return -EINVAL;
ret = (int)(*cache_map)[cache_type][cache_op][cache_result];
if (ret == CACHE_OP_UNSUPPORTED)
return -ENOENT;
return ret;
}
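/*
 * For illustration: a perf HW-cache config packs type, op and result
 * one byte each, exactly as decoded above, so an L1D read-miss event is
 *
 *     config = C(L1D) | (C(OP_READ) << 8) | (C(RESULT_MISS) << 16);
 *
 * which armv8_pmuv3_perf_cache_map maps to
 * ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL.
 */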
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int
armpmu_map_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX], uint64_t config)
{
int mapping;
if (config >= PERF_COUNT_HW_MAX)
return -EINVAL;
mapping = (*event_map)[config];
return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int
armpmu_map_raw_event(uint32_t raw_event_mask, uint64_t config)
{
return (int)(config & raw_event_mask);
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int map_cpu_event(uint32_t type, uint64_t config,
const unsigned (*event_map)[PERF_COUNT_HW_MAX],
const unsigned (*cache_map)
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX],
uint32_t raw_event_mask)
{
switch (type) {
case PERF_TYPE_HARDWARE:
return armpmu_map_event(event_map, config);
case PERF_TYPE_HW_CACHE:
return armpmu_map_cache_event(cache_map, config);
case PERF_TYPE_RAW:
return armpmu_map_raw_event(raw_event_mask, config);
}
return -ENOENT;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_counter_valid(int idx)
{
return idx >= ARMV8_IDX_CYCLE_COUNTER && idx <= ARMV8_IDX_COUNTER_LAST;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline uint32_t armv8pmu_getreset_flags(void)
{
uint32_t value;
/* Read */
asm volatile("mrs %0, pmovsclr_el0" : "=r" (value));
/* Write to clear flags */
value &= ARMV8_OVSR_MASK;
asm volatile("msr pmovsclr_el0, %0" :: "r" (value));
return value;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_has_overflowed(uint32_t pmovsr)
{
return pmovsr & ARMV8_OVERFLOWED_MASK;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_counter_has_overflowed(uint32_t pmnc, int idx)
{
int ret = 0;
uint32_t counter;
if (!armv8pmu_counter_valid(idx)) {
ekprintf("CPU%u checking wrong counter %d overflow status\n",
ihk_mc_get_processor_id(), idx);
} else {
counter = ARMV8_IDX_TO_COUNTER(idx);
ret = pmnc & BIT(counter);
}
return ret;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int armv8_pmuv3_map_event(uint32_t type, uint64_t config)
{
return map_cpu_event(type, config, &armv8_pmuv3_perf_map,
&armv8_pmuv3_perf_cache_map,
ARMV8_EVTYPE_EVENT);
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline uint32_t armv8pmu_pmcr_read(void)
{
uint32_t val;
asm volatile("mrs %0, pmcr_el0" : "=r" (val));
return val;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline void armv8pmu_pmcr_write(uint32_t val)
{
val &= ARMV8_PMCR_MASK;
isb();
asm volatile("msr pmcr_el0, %0" :: "r" (val));
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_select_counter(int idx)
{
uint32_t counter;
if (!armv8pmu_counter_valid(idx)) {
ekprintf("CPU%u selecting wrong PMNC counter %d\n",
ihk_mc_get_processor_id(), idx);
return -EINVAL;
}
counter = ARMV8_IDX_TO_COUNTER(idx);
asm volatile("msr pmselr_el0, %0" :: "r" (counter));
isb();
return idx;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline uint32_t armv8pmu_read_counter(int idx)
{
uint32_t value = 0;
if (!armv8pmu_counter_valid(idx))
ekprintf("CPU%u reading wrong counter %d\n",
ihk_mc_get_processor_id(), idx);
else if (idx == ARMV8_IDX_CYCLE_COUNTER)
asm volatile("mrs %0, pmccntr_el0" : "=r" (value));
else if (armv8pmu_select_counter(idx) == idx)
asm volatile("mrs %0, pmxevcntr_el0" : "=r" (value));
return value;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline void armv8pmu_write_counter(int idx, uint32_t value)
{
if (!armv8pmu_counter_valid(idx))
ekprintf("CPU%u writing wrong counter %d\n",
ihk_mc_get_processor_id(), idx);
else if (idx == ARMV8_IDX_CYCLE_COUNTER)
asm volatile("msr pmccntr_el0, %0" :: "r" (value));
else if (armv8pmu_select_counter(idx) == idx)
asm volatile("msr pmxevcntr_el0, %0" :: "r" (value));
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_enable_intens(int idx)
{
uint32_t counter;
if (!armv8pmu_counter_valid(idx)) {
ekprintf("CPU%u enabling wrong PMNC counter IRQ enable %d\n",
ihk_mc_get_processor_id(), idx);
return -EINVAL;
}
counter = ARMV8_IDX_TO_COUNTER(idx);
asm volatile("msr pmintenset_el1, %0" :: "r" (BIT(counter)));
return idx;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_disable_intens(int idx)
{
uint32_t counter;
if (!armv8pmu_counter_valid(idx)) {
ekprintf("CPU%u disabling wrong PMNC counter IRQ enable %d\n",
ihk_mc_get_processor_id(), idx);
return -EINVAL;
}
counter = ARMV8_IDX_TO_COUNTER(idx);
asm volatile("msr pmintenclr_el1, %0" :: "r" (BIT(counter)));
isb();
/* Clear the overflow flag in case an interrupt is pending. */
asm volatile("msr pmovsclr_el0, %0" :: "r" (BIT(counter)));
isb();
return idx;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int armv8pmu_set_event_filter(unsigned long* config_base, int mode)
{
if (!(mode & PERFCTR_USER_MODE)) {
*config_base |= ARMV8_EXCLUDE_EL0;
}
if (!(mode & PERFCTR_KERNEL_MODE)) {
*config_base |= ARMV8_EXCLUDE_EL1;
}
if (0) {
/* The common code ignores exclude_hv, so always exclude EL2. */
*config_base |= ARMV8_INCLUDE_EL2;
}
return 0;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline void armv8pmu_write_evtype(int idx, uint32_t val)
{
if (armv8pmu_select_counter(idx) == idx) {
val &= ARMV8_EVTYPE_MASK;
asm volatile("msr pmxevtyper_el0, %0" :: "r" (val));
}
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_enable_counter(int idx)
{
uint32_t counter;
if (!armv8pmu_counter_valid(idx)) {
ekprintf("CPU%u enabling wrong PMNC counter %d\n",
ihk_mc_get_processor_id(), idx);
return -EINVAL;
}
counter = ARMV8_IDX_TO_COUNTER(idx);
asm volatile("msr pmcntenset_el0, %0" :: "r" (BIT(counter)));
return idx;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static inline int armv8pmu_disable_counter(int idx)
{
uint32_t counter;
if (!armv8pmu_counter_valid(idx)) {
ekprintf("CPU%u disabling wrong PMNC counter %d\n",
ihk_mc_get_processor_id(), idx);
return -EINVAL;
}
counter = ARMV8_IDX_TO_COUNTER(idx);
asm volatile("msr pmcntenclr_el0, %0" :: "r" (BIT(counter)));
return idx;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int armv8pmu_start(void)
{
/* Enable user-mode access to counters. */
asm volatile("msr pmuserenr_el0, %0" :: "r"(1));
/* Enable all counters */
armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMCR_E);
return 0;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static void armv8pmu_stop(void)
{
/* Disable all counters */
armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMCR_E);
/* Disable user-mode access to counters. */
asm volatile("msr pmuserenr_el0, %0" :: "r" (0));
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static void armv8pmu_disable_event(int idx)
{
/*
* Disable counter
*/
armv8pmu_disable_counter(idx);
/*
* Disable interrupt for this counter
*/
armv8pmu_disable_intens(idx);
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static void armv8pmu_reset(void* info)
{
struct arm_pmu* cpu_pmu = (struct arm_pmu*)info;
uint32_t idx, nb_cnt = cpu_pmu->num_events;
/* The counter and interrupt enable registers are unknown at reset. */
for (idx = ARMV8_IDX_CYCLE_COUNTER; idx < nb_cnt; ++idx)
armv8pmu_disable_event(idx);
/* Initialize & Reset PMNC: C and P bits. */
armv8pmu_pmcr_write(ARMV8_PMCR_P | ARMV8_PMCR_C);
/* Disable access from userspace. */
asm volatile("msr pmuserenr_el0, %0" :: "r" (0));
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static int armv8pmu_get_event_idx(int num_events, unsigned long used_mask)
{
int idx;
for (idx = ARMV8_IDX_COUNTER0; idx < num_events; ++idx) {
if (!(used_mask & (1UL << idx))) {
return idx;
}
}
/* The counters are all in use. */
return -EAGAIN;
}
/* @ref.impl arch/arm64/kernel/perf_event.c */
static uint32_t armv8pmu_read_num_pmnc_events(void)
{
uint32_t nb_cnt;
/* Read the nb of CNTx counters supported from PMNC */
nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
/* Add the CPU cycles counter and return */
return nb_cnt + 1;
}
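/*
 * Example: a PMCR_EL0 value whose N field (bits [15:11]) reads 6 gives
 * nb_cnt = 6, so this returns 7 counters: six event counters plus the
 * CPU cycle counter.
 */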
/* @ref.impl arch/arm64/kernel/perf_event.c */
static void armv8pmu_handle_irq(void *priv)
{
uint32_t pmovsr;
/*
* Get and reset the IRQ flags
*/
pmovsr = armv8pmu_getreset_flags();
/*
* Did an overflow occur?
*/
if (!armv8pmu_has_overflowed(pmovsr))
return;
/*
* TODO[PMU]: Handle the counter(s) overflow(s)
*/
}
static struct ihk_mc_interrupt_handler armv8pmu_handler = {
.func = armv8pmu_handle_irq,
.priv = NULL,
};
int armv8pmu_init(struct arm_pmu* cpu_pmu)
{
cpu_pmu->num_events = armv8pmu_read_num_pmnc_events();
cpu_pmu->read_counter = armv8pmu_read_counter;
cpu_pmu->write_counter = armv8pmu_write_counter;
cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
cpu_pmu->write_evtype = armv8pmu_write_evtype;
cpu_pmu->enable_intens = armv8pmu_enable_intens;
cpu_pmu->disable_intens = armv8pmu_disable_intens;
cpu_pmu->enable_counter = armv8pmu_enable_counter;
cpu_pmu->disable_counter = armv8pmu_disable_counter;
cpu_pmu->enable_pmu = armv8pmu_start;
cpu_pmu->disable_pmu = armv8pmu_stop;
cpu_pmu->get_event_idx = armv8pmu_get_event_idx;
cpu_pmu->map_event = armv8_pmuv3_map_event;
cpu_pmu->handler = &armv8pmu_handler;
return 0;
}
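/*
 * Note that armv8pmu_start() sets PMUSERENR_EL0 to 1, so once the PMU
 * is enabled an EL0 program can read the cycle counter directly; a
 * minimal sketch, assuming it runs after arm64_enable_pmu():
 *
 *     static inline uint64_t read_cycles(void)
 *     {
 *         uint64_t v;
 *         asm volatile("mrs %0, pmccntr_el0" : "=r" (v));
 *         return v;
 *     }
 */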


@@ -0,0 +1,311 @@
/* postk_print_sysreg.c COPYRIGHT FUJITSU LIMITED 2016 */
/*
* usage:
* (gdb) call/x postk_debug_sysreg_ttbr1_el1()
* $1 = 0x4e64f000
*/
#define postk_debug_sysreg(sysreg) __postk_debug_sysreg(sysreg, sysreg)
#define __postk_debug_sysreg(fname, regname) \
unsigned long postk_debug_sysreg_ ## fname (void) \
{ \
unsigned long sysreg; \
asm volatile( \
"mrs %0, " # regname "\n" \
: "=r" (sysreg) \
: \
: "memory"); \
return sysreg; \
}
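/*
 * For example, postk_debug_sysreg(ttbr1_el1) below expands to:
 *
 *     unsigned long postk_debug_sysreg_ttbr1_el1(void)
 *     {
 *         unsigned long sysreg;
 *         asm volatile("mrs %0, ttbr1_el1\n" : "=r" (sysreg) : : "memory");
 *         return sysreg;
 *     }
 */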
/*
* ARM® Architecture Reference Manual ARMv8, for ARMv8-A architecture profile (Errata markup, Beta)
* - Table J-5 Alphabetical index of AArch64 Registers
*/
postk_debug_sysreg(actlr_el1)
postk_debug_sysreg(actlr_el2)
postk_debug_sysreg(actlr_el3)
postk_debug_sysreg(afsr0_el1)
postk_debug_sysreg(afsr0_el2)
postk_debug_sysreg(afsr0_el3)
postk_debug_sysreg(afsr1_el1)
postk_debug_sysreg(afsr1_el2)
postk_debug_sysreg(afsr1_el3)
postk_debug_sysreg(aidr_el1)
postk_debug_sysreg(amair_el1)
postk_debug_sysreg(amair_el2)
postk_debug_sysreg(amair_el3)
/*postk_debug_sysreg(at s12e0r)*/
/*postk_debug_sysreg(at s12e0w)*/
/*postk_debug_sysreg(at s12e1r)*/
/*postk_debug_sysreg(at s12e1w)*/
/*postk_debug_sysreg(at s1e0r)*/
/*postk_debug_sysreg(at s1e0w)*/
/*postk_debug_sysreg(at s1e1r)*/
/*postk_debug_sysreg(at s1e1w)*/
/*postk_debug_sysreg(at s1e2r)*/
/*postk_debug_sysreg(at s1e2w)*/
/*postk_debug_sysreg(at s1e3r)*/
/*postk_debug_sysreg(at s1e3w)*/
postk_debug_sysreg(ccsidr_el1)
postk_debug_sysreg(clidr_el1)
postk_debug_sysreg(cntfrq_el0)
postk_debug_sysreg(cnthctl_el2)
postk_debug_sysreg(cnthp_ctl_el2)
postk_debug_sysreg(cnthp_cval_el2)
postk_debug_sysreg(cnthp_tval_el2)
postk_debug_sysreg(cntkctl_el1)
postk_debug_sysreg(cntp_ctl_el0)
postk_debug_sysreg(cntp_cval_el0)
postk_debug_sysreg(cntp_tval_el0)
postk_debug_sysreg(cntpct_el0)
postk_debug_sysreg(cntps_ctl_el1)
postk_debug_sysreg(cntps_cval_el1)
postk_debug_sysreg(cntps_tval_el1)
postk_debug_sysreg(cntv_ctl_el0)
postk_debug_sysreg(cntv_cval_el0)
postk_debug_sysreg(cntv_tval_el0)
postk_debug_sysreg(cntvct_el0)
postk_debug_sysreg(cntvoff_el2)
postk_debug_sysreg(contextidr_el1)
postk_debug_sysreg(cpacr_el1)
postk_debug_sysreg(cptr_el2)
postk_debug_sysreg(cptr_el3)
postk_debug_sysreg(csselr_el1)
postk_debug_sysreg(ctr_el0)
postk_debug_sysreg(currentel)
postk_debug_sysreg(dacr32_el2)
postk_debug_sysreg(daif)
postk_debug_sysreg(dbgauthstatus_el1)
/*postk_debug_sysreg(dbgbcr<n>_el1)*/
/*postk_debug_sysreg(dbgbvr<n>_el1)*/
postk_debug_sysreg(dbgclaimclr_el1)
postk_debug_sysreg(dbgclaimset_el1)
postk_debug_sysreg(dbgdtr_el0)
postk_debug_sysreg(dbgdtrrx_el0)
postk_debug_sysreg(dbgdtrtx_el0)
postk_debug_sysreg(dbgprcr_el1)
postk_debug_sysreg(dbgvcr32_el2)
/*postk_debug_sysreg(dbgwcr<n>_el1)*/
/*postk_debug_sysreg(dbgwvr<n>_el1)*/
/*postk_debug_sysreg(dc cisw)*/
/*postk_debug_sysreg(dc civac)*/
/*postk_debug_sysreg(dc csw)*/
/*postk_debug_sysreg(dc cvac)*/
/*postk_debug_sysreg(dc cvau)*/
/*postk_debug_sysreg(dc isw)*/
/*postk_debug_sysreg(dc ivac)*/
/*postk_debug_sysreg(dc zva)*/
postk_debug_sysreg(dczid_el0)
postk_debug_sysreg(dlr_el0)
postk_debug_sysreg(dspsr_el0)
postk_debug_sysreg(elr_el1)
postk_debug_sysreg(elr_el2)
postk_debug_sysreg(elr_el3)
postk_debug_sysreg(esr_el1)
postk_debug_sysreg(esr_el2)
postk_debug_sysreg(esr_el3)
postk_debug_sysreg(far_el1)
postk_debug_sysreg(far_el2)
postk_debug_sysreg(far_el3)
postk_debug_sysreg(fpcr)
postk_debug_sysreg(fpexc32_el2)
postk_debug_sysreg(fpsr)
postk_debug_sysreg(hacr_el2)
postk_debug_sysreg(hcr_el2)
postk_debug_sysreg(hpfar_el2)
postk_debug_sysreg(hstr_el2)
/*postk_debug_sysreg(ic iallu)*/
/*postk_debug_sysreg(ic ialluis)*/
/*postk_debug_sysreg(ic ivau)*/
/*postk_debug_sysreg(icc_ap0r0_el1)*/
/*postk_debug_sysreg(icc_ap0r1_el1)*/
/*postk_debug_sysreg(icc_ap0r2_el1)*/
/*postk_debug_sysreg(icc_ap0r3_el1)*/
/*postk_debug_sysreg(icc_ap1r0_el1)*/
/*postk_debug_sysreg(icc_ap1r1_el1)*/
/*postk_debug_sysreg(icc_ap1r2_el1)*/
/*postk_debug_sysreg(icc_ap1r3_el1)*/
/*postk_debug_sysreg(icc_asgi1r_el1)*/
/*postk_debug_sysreg(icc_bpr0_el1)*/
/*postk_debug_sysreg(icc_bpr1_el1)*/
/*postk_debug_sysreg(icc_ctlr_el1)*/
/*postk_debug_sysreg(icc_ctlr_el3)*/
/*postk_debug_sysreg(icc_dir_el1)*/
/*postk_debug_sysreg(icc_eoir0_el1)*/
/*postk_debug_sysreg(icc_eoir1_el1)*/
/*postk_debug_sysreg(icc_hppir0_el1)*/
/*postk_debug_sysreg(icc_hppir1_el1)*/
/*postk_debug_sysreg(icc_iar0_el1)*/
/*postk_debug_sysreg(icc_iar1_el1)*/
/*postk_debug_sysreg(icc_igrpen0_el1)*/
/*postk_debug_sysreg(icc_igrpen1_el1)*/
/*postk_debug_sysreg(icc_igrpen1_el3)*/
/*postk_debug_sysreg(icc_pmr_el1)*/
/*postk_debug_sysreg(icc_rpr_el1)*/
/*postk_debug_sysreg(icc_seien_el1)*/
/*postk_debug_sysreg(icc_sgi0r_el1)*/
/*postk_debug_sysreg(icc_sgi1r_el1)*/
/*postk_debug_sysreg(icc_sre_el1)*/
/*postk_debug_sysreg(icc_sre_el2)*/
/*postk_debug_sysreg(icc_sre_el3)*/
/*postk_debug_sysreg(ich_ap0r0_el2)*/
/*postk_debug_sysreg(ich_ap0r1_el2)*/
/*postk_debug_sysreg(ich_ap0r2_el2)*/
/*postk_debug_sysreg(ich_ap0r3_el2)*/
/*postk_debug_sysreg(ich_ap1r0_el2)*/
/*postk_debug_sysreg(ich_ap1r1_el2)*/
/*postk_debug_sysreg(ich_ap1r2_el2)*/
/*postk_debug_sysreg(ich_ap1r3_el2)*/
/*postk_debug_sysreg(ich_eisr_el2)*/
/*postk_debug_sysreg(ich_elsr_el2)*/
/*postk_debug_sysreg(ich_hcr_el2)*/
/*postk_debug_sysreg(ich_lr<n>_el2)*/
/*postk_debug_sysreg(ich_misr_el2)*/
/*postk_debug_sysreg(ich_vmcr_el2)*/
/*postk_debug_sysreg(ich_vseir_el2)*/
/*postk_debug_sysreg(ich_vtr_el2)*/
postk_debug_sysreg(id_aa64afr0_el1)
postk_debug_sysreg(id_aa64afr1_el1)
postk_debug_sysreg(id_aa64dfr0_el1)
postk_debug_sysreg(id_aa64dfr1_el1)
postk_debug_sysreg(id_aa64isar0_el1)
postk_debug_sysreg(id_aa64isar1_el1)
postk_debug_sysreg(id_aa64mmfr0_el1)
postk_debug_sysreg(id_aa64mmfr1_el1)
postk_debug_sysreg(id_aa64pfr0_el1)
postk_debug_sysreg(id_aa64pfr1_el1)
postk_debug_sysreg(id_afr0_el1)
postk_debug_sysreg(id_dfr0_el1)
postk_debug_sysreg(id_isar0_el1)
postk_debug_sysreg(id_isar1_el1)
postk_debug_sysreg(id_isar2_el1)
postk_debug_sysreg(id_isar3_el1)
postk_debug_sysreg(id_isar4_el1)
postk_debug_sysreg(id_isar5_el1)
postk_debug_sysreg(id_mmfr0_el1)
postk_debug_sysreg(id_mmfr1_el1)
postk_debug_sysreg(id_mmfr2_el1)
postk_debug_sysreg(id_mmfr3_el1)
postk_debug_sysreg(id_pfr0_el1)
postk_debug_sysreg(id_pfr1_el1)
postk_debug_sysreg(ifsr32_el2)
postk_debug_sysreg(isr_el1)
postk_debug_sysreg(mair_el1)
postk_debug_sysreg(mair_el2)
postk_debug_sysreg(mair_el3)
postk_debug_sysreg(mdccint_el1)
postk_debug_sysreg(mdccsr_el0)
postk_debug_sysreg(mdcr_el2)
postk_debug_sysreg(mdcr_el3)
postk_debug_sysreg(mdrar_el1)
postk_debug_sysreg(mdscr_el1)
postk_debug_sysreg(midr_el1)
postk_debug_sysreg(mpidr_el1)
postk_debug_sysreg(mvfr0_el1)
postk_debug_sysreg(mvfr1_el1)
postk_debug_sysreg(mvfr2_el1)
postk_debug_sysreg(nzcv)
postk_debug_sysreg(osdlr_el1)
postk_debug_sysreg(osdtrrx_el1)
postk_debug_sysreg(osdtrtx_el1)
postk_debug_sysreg(oseccr_el1)
postk_debug_sysreg(oslar_el1)
postk_debug_sysreg(oslsr_el1)
postk_debug_sysreg(par_el1)
postk_debug_sysreg(pmccfiltr_el0)
postk_debug_sysreg(pmccntr_el0)
postk_debug_sysreg(pmceid0_el0)
postk_debug_sysreg(pmceid1_el0)
postk_debug_sysreg(pmcntenclr_el0)
postk_debug_sysreg(pmcntenset_el0)
postk_debug_sysreg(pmcr_el0)
/*postk_debug_sysreg(pmevcntr<n>_el0)*/
/*postk_debug_sysreg(pmevtyper<n>_el0)*/
postk_debug_sysreg(pmintenclr_el1)
postk_debug_sysreg(pmintenset_el1)
postk_debug_sysreg(pmovsclr_el0)
postk_debug_sysreg(pmovsset_el0)
postk_debug_sysreg(pmselr_el0)
postk_debug_sysreg(pmswinc_el0)
postk_debug_sysreg(pmuserenr_el0)
postk_debug_sysreg(pmxevcntr_el0)
postk_debug_sysreg(pmxevtyper_el0)
postk_debug_sysreg(revidr_el1)
postk_debug_sysreg(rmr_el1)
postk_debug_sysreg(rmr_el2)
postk_debug_sysreg(rmr_el3)
postk_debug_sysreg(rvbar_el1)
postk_debug_sysreg(rvbar_el2)
postk_debug_sysreg(rvbar_el3)
/*postk_debug_sysreg(s3_<op1>_<cn>_<cm>_<op2>)*/
postk_debug_sysreg(scr_el3)
postk_debug_sysreg(sctlr_el1)
postk_debug_sysreg(sctlr_el2)
postk_debug_sysreg(sctlr_el3)
postk_debug_sysreg(sder32_el3)
postk_debug_sysreg(sp_el0)
postk_debug_sysreg(sp_el1)
postk_debug_sysreg(sp_el2)
/*postk_debug_sysreg(sp_el3)*/
postk_debug_sysreg(spsel)
postk_debug_sysreg(spsr_abt)
postk_debug_sysreg(spsr_el1)
postk_debug_sysreg(spsr_el2)
postk_debug_sysreg(spsr_el3)
postk_debug_sysreg(spsr_fiq)
postk_debug_sysreg(spsr_irq)
postk_debug_sysreg(spsr_und)
postk_debug_sysreg(tcr_el1)
postk_debug_sysreg(tcr_el2)
postk_debug_sysreg(tcr_el3)
postk_debug_sysreg(teecr32_el1)
postk_debug_sysreg(teehbr32_el1)
/*postk_debug_sysreg(tlbi alle1)*/
/*postk_debug_sysreg(tlbi alle1is)*/
/*postk_debug_sysreg(tlbi alle2)*/
/*postk_debug_sysreg(tlbi alle2is)*/
/*postk_debug_sysreg(tlbi alle3)*/
/*postk_debug_sysreg(tlbi alle3is)*/
/*postk_debug_sysreg(tlbi aside1)*/
/*postk_debug_sysreg(tlbi aside1is)*/
/*postk_debug_sysreg(tlbi ipas2e1)*/
/*postk_debug_sysreg(tlbi ipas2e1is)*/
/*postk_debug_sysreg(tlbi ipas2le1)*/
/*postk_debug_sysreg(tlbi ipas2le1is)*/
/*postk_debug_sysreg(tlbi vaae1)*/
/*postk_debug_sysreg(tlbi vaae1is)*/
/*postk_debug_sysreg(tlbi vaale1)*/
/*postk_debug_sysreg(tlbi vaale1is)*/
/*postk_debug_sysreg(tlbi vae1)*/
/*postk_debug_sysreg(tlbi vae1is)*/
/*postk_debug_sysreg(tlbi vae2)*/
/*postk_debug_sysreg(tlbi vae2is)*/
/*postk_debug_sysreg(tlbi vae3)*/
/*postk_debug_sysreg(tlbi vae3is)*/
/*postk_debug_sysreg(tlbi vale1)*/
/*postk_debug_sysreg(tlbi vale1is)*/
/*postk_debug_sysreg(tlbi vale2)*/
/*postk_debug_sysreg(tlbi vale2is)*/
/*postk_debug_sysreg(tlbi vale3)*/
/*postk_debug_sysreg(tlbi vale3is)*/
/*postk_debug_sysreg(tlbi vmalle1)*/
/*postk_debug_sysreg(tlbi vmalle1is)*/
/*postk_debug_sysreg(tlbi vmalls12e1)*/
/*postk_debug_sysreg(tlbi vmalls12e1is)*/
postk_debug_sysreg(tpidr_el0)
postk_debug_sysreg(tpidr_el1)
postk_debug_sysreg(tpidr_el2)
postk_debug_sysreg(tpidr_el3)
postk_debug_sysreg(tpidrro_el0)
postk_debug_sysreg(ttbr0_el1)
postk_debug_sysreg(ttbr0_el2)
postk_debug_sysreg(ttbr0_el3)
postk_debug_sysreg(ttbr1_el1)
postk_debug_sysreg(vbar_el1)
postk_debug_sysreg(vbar_el2)
postk_debug_sysreg(vbar_el3)
postk_debug_sysreg(vmpidr_el2)
postk_debug_sysreg(vpidr_el2)
postk_debug_sysreg(vtcr_el2)
postk_debug_sysreg(vttbr_el2)


@@ -0,0 +1,13 @@
/* proc-macros.S COPYRIGHT FUJITSU LIMITED 2015 */
#include <arch-memory.h>
/*
* dcache_line_size - get the minimum D-cache line size from the CTR register.
*/
.macro dcache_line_size, reg, tmp
mrs \tmp, ctr_el0 // read CTR
ubfm \tmp, \tmp, #16, #19 // cache line size encoding
mov \reg, #4 // bytes per word
lsl \reg, \reg, \tmp // actual cache line size
.endm
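// Example: on a core whose CTR_EL0 line-size field (bits [19:16]) reads
// 4, this yields \reg = 4 << 4 = 64, i.e. a 64-byte D-cache line.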

arch/arm64/kernel/proc.S

@@ -0,0 +1,148 @@
/* proc.S COPYRIGHT FUJITSU LIMITED 2015-2017 */
#include <linkage.h>
#include <arch-memory.h>
#include <sysreg.h>
#include <assembler.h>
#include "proc-macros.S"
#ifdef CONFIG_ARM64_64K_PAGES
# define TCR_TG_FLAGS TCR_TG0_64K | TCR_TG1_64K
#else
# define TCR_TG_FLAGS TCR_TG0_4K | TCR_TG1_4K
#endif
//#ifdef CONFIG_SMP
#define TCR_SMP_FLAGS TCR_SHARED
//#else
//#define TCR_SMP_FLAGS 0
//#endif
/* PTWs cacheable, inner/outer WBWA */
#define TCR_CACHE_FLAGS TCR_IRGN_WBWA | TCR_ORGN_WBWA
#define MAIR(attr, mt) ((attr) << ((mt) * 8))
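/*
 * e.g. MAIR(0xff, MT_NORMAL) places attribute 0xff in byte MT_NORMAL of
 * MAIR_EL1; with MT_NORMAL == 4 (AttrIndx 100 in the table inside
 * __cpu_setup below), that is bits [39:32].
 */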
/*
* cpu_do_idle()
*
* Idle the processor (wait for interrupt).
*/
#if defined(CONFIG_HAS_NMI)
#include <arm-gic-v3.h>
ENTRY(cpu_do_idle)
mrs x0, daif // save I bit
msr daifset, #2 // set I bit
mrs_s x1, ICC_PMR_EL1 // save PMR
mov x2, #ICC_PMR_EL1_UNMASKED
msr_s ICC_PMR_EL1, x2 // unmask at PMR
dsb sy // WFI may enter a low-power mode
wfi
msr_s ICC_PMR_EL1, x1 // restore PMR
msr daif, x0 // restore I bit
ret
ENDPROC(cpu_do_idle)
#else /* defined(CONFIG_HAS_NMI) */
ENTRY(cpu_do_idle)
dsb sy // WFI may enter a low-power mode
wfi
ret
ENDPROC(cpu_do_idle)
#endif /* defined(CONFIG_HAS_NMI) */
/*
* cpu_do_switch_mm(pgd_phys, tsk)
*
* Set the translation table base pointer to be pgd_phys.
*
* - pgd_phys - physical address of new TTB
*/
ENTRY(cpu_do_switch_mm)
//mmid w1, x1 // get mm->context.id
bfi x0, x1, #48, #16 // set the ASID
msr ttbr0_el1, x0 // set TTBR0
isb
ret
ENDPROC(cpu_do_switch_mm)
.section ".text.init", #alloc, #execinstr
/*
* __cpu_setup
*
* Initialise the processor for turning the MMU on. Return in x0 the
* value of the SCTLR_EL1 register.
*/
ENTRY(__cpu_setup)
tlbi vmalle1 // Invalidate local TLB
dsb nsh
mov x0, #3 << 20
/* SVE */
mrs x5, id_aa64pfr0_el1
ubfx x5, x5, #ID_AA64PFR0_SVE_SHIFT, #4
cbz x5, 1f
orr x0, x0, #CPACR_EL1_ZEN // SVE: don't trap at EL1 or EL0
1: msr cpacr_el1, x0 // Enable FP/ASIMD
mov x0, #1 << 12 // Reset mdscr_el1 and disable
msr mdscr_el1, x0 // access to the DCC from EL0
isb // Unmask debug exceptions now,
enable_dbg // since this is per-cpu
/*
* Memory region attributes for LPAE:
*
* n = AttrIndx[2:0]
* n MAIR
* DEVICE_nGnRnE 000 00000000
* DEVICE_nGnRE 001 00000100
* DEVICE_GRE 010 00001100
* NORMAL_NC 011 01000100
* NORMAL 100 11111111
*/
ldr x5, =MAIR(0x00, MT_DEVICE_nGnRnE) | \
MAIR(0x04, MT_DEVICE_nGnRE) | \
MAIR(0x0c, MT_DEVICE_GRE) | \
MAIR(0x44, MT_NORMAL_NC) | \
MAIR(0xff, MT_NORMAL)
msr mair_el1, x5
/*
* Prepare SCTLR
*/
adr x5, crval
ldp w5, w6, [x5]
mrs x0, sctlr_el1
bic x0, x0, x5 // clear bits
orr x0, x0, x6 // set bits
/*
* Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
* both user and kernel.
*/
ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0
/*
* Read the PARange bits from ID_AA64MMFR0_EL1 and set the IPS bits in
* TCR_EL1.
*/
mrs x9, ID_AA64MMFR0_EL1
bfi x10, x9, #32, #3
msr tcr_el1, x10
ret // return to head.S
ENDPROC(__cpu_setup)
/*
* n n T
* U E WT T UD US IHBS
* CE0 XWHW CZ ME TEEA S
* .... .IEE .... NEAI TE.I ..AD DEN0 ACAM
* 0011 0... 1101 ..0. ..0. 10.. .... .... < hardware reserved
* .... .1.. .... 01.1 11.1 ..01 0001 1101 < software settings
*/
.type crval, #object
crval:
.word 0x000802e2 // clear
.word 0x0405d11d // set

arch/arm64/kernel/psci.c

@@ -0,0 +1,155 @@
/* psci.c COPYRIGHT FUJITSU LIMITED 2015-2016 */
/* @ref.impl arch/arm64/kernel/psci.c */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Copyright (C) 2013 ARM Limited
*
* Author: Will Deacon <will.deacon@arm.com>
*/
#include <psci.h>
#include <errno.h>
#include <ihk/types.h>
#include <ihk/debug.h>
#include <compiler.h>
#include <lwk/compiler.h>
//#define DEBUG_PRINT_PSCI
#ifdef DEBUG_PRINT_PSCI
#define dkprintf(...) kprintf(__VA_ARGS__)
#define ekprintf(...) kprintf(__VA_ARGS__)
#else
#define dkprintf(...) do { if (0) kprintf(__VA_ARGS__); } while (0)
#define ekprintf(...) kprintf(__VA_ARGS__)
#endif
#define PSCI_POWER_STATE_TYPE_POWER_DOWN 1
extern uint64_t ihk_param_cpu_logical_map;
static uint64_t *__cpu_logical_map = &ihk_param_cpu_logical_map;
#define cpu_logical_map(cpu) __cpu_logical_map[cpu]
struct psci_power_state {
uint16_t id;
uint8_t type;
uint8_t affinity_level;
};
static int psci_to_linux_errno(int errno)
{
switch (errno) {
case PSCI_RET_SUCCESS:
return 0;
case PSCI_RET_NOT_SUPPORTED:
return -EOPNOTSUPP;
case PSCI_RET_INVALID_PARAMS:
return -EINVAL;
case PSCI_RET_DENIED:
return -EPERM;
}
return -EINVAL;
}
static uint32_t psci_power_state_pack(struct psci_power_state state)
{
return ((state.id << PSCI_0_2_POWER_STATE_ID_SHIFT)
& PSCI_0_2_POWER_STATE_ID_MASK) |
((state.type << PSCI_0_2_POWER_STATE_TYPE_SHIFT)
& PSCI_0_2_POWER_STATE_TYPE_MASK) |
((state.affinity_level << PSCI_0_2_POWER_STATE_AFFL_SHIFT)
& PSCI_0_2_POWER_STATE_AFFL_MASK);
}
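/*
 * Example, assuming the Linux PSCI 0.2 bit layout (state ID in bits
 * [15:0], type in bit 16, affinity level in bits [25:24]): the
 * power-down state built in psci_cpu_off() below (id 0, type 1,
 * level 0) packs to 0x00010000.
 */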
static noinline int __invoke_psci_fn_hvc(uint64_t function_id, uint64_t arg0, uint64_t arg1,
uint64_t arg2)
{
asm volatile(
__asmeq("%0", "x0")
__asmeq("%1", "x1")
__asmeq("%2", "x2")
__asmeq("%3", "x3")
"hvc #0\n"
: "+r" (function_id)
: "r" (arg0), "r" (arg1), "r" (arg2));
return function_id;
}
static noinline int __invoke_psci_fn_smc(uint64_t function_id, uint64_t arg0, uint64_t arg1,
uint64_t arg2)
{
asm volatile(
__asmeq("%0", "x0")
__asmeq("%1", "x1")
__asmeq("%2", "x2")
__asmeq("%3", "x3")
"smc #0\n"
: "+r" (function_id)
: "r" (arg0), "r" (arg1), "r" (arg2));
return function_id;
}
static int (*invoke_psci_fn)(uint64_t, uint64_t, uint64_t, uint64_t) = NULL;
#define PSCI_METHOD_INVALID -1
#define PSCI_METHOD_HVC 0
#define PSCI_METHOD_SMC 1
int psci_init(void)
{
extern unsigned long ihk_param_psci_method;
int ret = 0;
if (ihk_param_psci_method == PSCI_METHOD_SMC) {
dkprintf("psci_init(): set invoke_psci_fn = __invoke_psci_fn_smc\n");
invoke_psci_fn = __invoke_psci_fn_smc;
} else if (ihk_param_psci_method == PSCI_METHOD_HVC) {
dkprintf("psci_init(): set invoke_psci_fn = __invoke_psci_fn_hvc\n");
invoke_psci_fn = __invoke_psci_fn_hvc;
} else {
ekprintf("psci_init(): ihk_param_psci_method invalid value. %ld\n", ihk_param_psci_method);
ret = -1;
}
return ret;
}
int psci_cpu_off(void)
{
int err;
uint32_t fn, power_state;
struct psci_power_state state = {
.type = PSCI_POWER_STATE_TYPE_POWER_DOWN,
};
fn = PSCI_0_2_FN_CPU_OFF;
power_state = psci_power_state_pack(state);
err = invoke_psci_fn(fn, power_state, 0, 0);
return psci_to_linux_errno(err);
}
static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
{
int err;
uint32_t fn;
fn = PSCI_0_2_FN64_CPU_ON;
err = invoke_psci_fn(fn, cpuid, entry_point, 0);
return psci_to_linux_errno(err);
}
int cpu_psci_cpu_boot(unsigned int cpu, unsigned long pc)
{
return psci_cpu_on(cpu_logical_map(cpu), pc);
}
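/*
 * A minimal shutdown sketch (illustrative): psci_init() runs once at
 * boot to pick the HVC or SMC conduit from ihk_param_psci_method; a CPU
 * being offlined then invokes psci_cpu_off() on itself and, on success,
 * never returns:
 *
 *     if (psci_init() == 0) {
 *         // ... later, on the CPU going down ...
 *         psci_cpu_off();
 *     }
 */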

arch/arm64/kernel/ptrace.c

File diff suppressed because it is too large

arch/arm64/kernel/smp.c

@@ -0,0 +1,22 @@
/* smp.c COPYRIGHT FUJITSU LIMITED 2015 */
#include <thread_info.h>
#include <smp.h>
/*
* as from 2.5, kernels no longer have an init_tasks structure
* so we need some other way of telling a new secondary core
* where to place its SVC stack
*/
/* initialize value for BSP bootup */
/* AP bootup value setup in ihk_mc_boot_cpu() */
struct start_kernel_param;
extern void start_kernel(struct start_kernel_param *param);
extern struct start_kernel_param *ihk_param_head;
struct secondary_data secondary_data = {
.stack = (char *)&init_thread_info + THREAD_START_SP,
.next_pc = (uint64_t)start_kernel,
.arg = (unsigned long)&ihk_param_head
};

arch/arm64/kernel/syscall.c

File diff suppressed because it is too large


@@ -0,0 +1,56 @@
/* trampoline.S COPYRIGHT FUJITSU LIMITED 2015 */
.section .rodata, "a", @progbits
.globl trampoline_code_data
base = .
trampoline_code_data:
.org 8
header_pgtbl:
.quad 0 /* page table address */
func_address:
.quad 0 /* load address */
arg:
.quad 0 /* next address */
stack_ptr:
.quad 0 /* initial stack */
debug:
.quad 0 /* debug area */
transit_pgtbl:
.quad 0 /* 32->64 bit table address */
cpu_start_body:
.balign 8
protect_start:
.balign 8
start_64:
boot_idtptr:
.short 0
.long 0
boot_gdtptr:
.short boot_gdt32_end - boot_gdt32
.long boot_gdt32 - base
.align 4
boot_gdt32:
.quad 0
.quad 0
.quad 0x00cf9b000000ffff
.quad 0x00cf93000000ffff
.quad 0x00af9b000000ffff
.quad 0x0000890000000067
boot_gdt32_end:
start_64_vec:
.long start_64 - base
.word 0
stack:
.org 0x1000
stack_end:
.globl trampoline_code_data_end
trampoline_code_data_end:

Some files were not shown because too many files have changed in this diff.