This project is a pull mirror of an upstream repository.
  1. 16 Apr, 2020 1 commit
  2. 15 Apr, 2020 2 commits
  3. 14 Apr, 2020 2 commits
  4. 11 Apr, 2020 1 commit
  5. 08 Apr, 2020 1 commit
  6. 08 Mar, 2020 1 commit
  7. 23 Feb, 2020 1 commit
  8. 03 Feb, 2020 1 commit
  9. 17 Aug, 2019 1 commit
  10. 13 Aug, 2019 1 commit
  11. 03 Aug, 2019 1 commit
  12. 16 Jul, 2019 1 commit
  13. 15 Jul, 2019 1 commit
  14. 13 Jul, 2019 2 commits
  15. 07 Jul, 2019 1 commit
  16. 06 Jul, 2019 1 commit
  17. 25 Jun, 2019 1 commit
  18. 16 Jun, 2019 2 commits
  19. 10 Jun, 2019 1 commit
  20. 02 Jun, 2019 1 commit
  21. 04 May, 2019 2 commits
  22. 16 Jan, 2019 1 commit
  23. 26 Jul, 2018 1 commit
    • Port the Memory Manager to AArch64, with full support for 4-level Paging, 4 KiB, 2 MiB, and 1 GiB Pages, and Execute-Disable! · a711534f
      Colin Finck authored
      * Make PageTableEntryFlags architecture-independent by adding builder pattern methods.
        Instead of providing PageTableEntryFlags::EXECUTE_DISABLE, one simply uses the execute_disable() method now.
        The different implementations of each method for AArch64 and x86_64 map to the respective architecture flags.
      * Make a boolean execute_disable the only additional parameter for mm::allocate() to make it architecture-independent.
      * Remove the do_ipi parameter from paging functions.
        It was x86_64-specific and there is nothing wrong with always doing an IPI when necessary and application processors have been booted.
      * Make the LEVEL/MAP_LEVEL constant ascending instead of descending for the AArch64 implementation to match with the Page Table names (L0, L1, etc.).
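The builder-pattern flags described above might look roughly like this (a hedged sketch: the type and method names follow the commit message, but the bit positions are placeholders, not the real x86_64 or AArch64 encodings):

```rust
// Illustrative sketch of architecture-independent page-table entry flags.
// Each architecture would implement the same methods with its own bits.
#[derive(Clone, Copy, Debug)]
struct PageTableEntryFlags(u64);

impl PageTableEntryFlags {
    const BLANK: PageTableEntryFlags = PageTableEntryFlags(0);

    // On x86_64 this would set the NX bit (bit 63); on AArch64 the
    // UXN/PXN bits. The bit used here is only for demonstration.
    fn execute_disable(mut self) -> Self {
        self.0 |= 1u64 << 63;
        self
    }

    fn writable(mut self) -> Self {
        self.0 |= 1u64 << 1;
        self
    }
}

fn main() {
    // Callers compose flags instead of OR-ing architecture-specific constants.
    let flags = PageTableEntryFlags::BLANK.writable().execute_disable();
    assert_eq!(flags.0, (1u64 << 63) | (1u64 << 1));
    println!("flags = {:#x}", flags.0);
}
```

Because each method returns `Self`, the call chain reads the same on every architecture while the flag values stay private to the per-architecture implementation.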
  24. 24 Jul, 2018 1 commit
    • The first real AArch64 bringup commit for HermitCore-rs · a9671ff5
      Colin Finck authored
      * Split the CMake files into an architecture-independent and an architecture-dependent part.
        This overhaul of the build system also removes the custom "module system", which doesn't make much sense for a Rust kernel and doesn't work well with such a split configuration.
      * Add an aarch64-unknown-hermit-kernel.json target for Xargo.
      * Implement basic IRQ and serial port functions for AArch64 to get a first output.
      * Copy the 4-level paging from x86_64 to AArch64 and remove the parts relying on the "x86" crate.
        While this still needs some work to get the names and flags right, 4-level paging should generally work on AArch64 with the same concepts that are used for x86_64.
* Comment out and stub out many functions for AArch64 to let it somewhat compile.
      * Redefine core_id as a CPU number that is guaranteed to be sequential to make it architecture-independent.
        For x86_64, this number is now translated to a Local APIC ID in the "apic" module only.
      * Add a per-architecture TaskStacks structure, which contains "stack" and "ist" on x86_64 and only "stack" on AArch64.
      * Add a per-architecture network_adapter_init function to initialize RTL8139 only for x86_64.
      * Get rid of the top-level "arch" directory and put the reasonable architecture-dependent include files into /include/hermit/<ARCH>, all prefixed with "arch_".
      * Make the inclusion of some crates dependent on the target architecture.
      * Rename get_number_of_processors to get_processor_count and make it return a usize.
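The core_id redefinition above can be sketched as follows (a hypothetical illustration: the APIC ID table and values are made up, and the real per-CPU lookup lives in the kernel's "apic" module):

```rust
// Sketch of the core_id redefinition: scheduling code works with a
// sequential CPU number, and only the x86_64 "apic" module translates
// that number into a Local APIC ID (which need not be sequential).
static CPU_LOCAL_APIC_IDS: &[u8] = &[0, 2, 4, 6]; // hypothetical mapping

/// Architecture-independent, guaranteed-sequential CPU number.
/// (Stand-in for the real per-CPU value.)
fn core_id() -> usize {
    1
}

/// x86_64-only translation, confined to the "apic" module.
fn local_apic_id(core_id: usize) -> u8 {
    CPU_LOCAL_APIC_IDS[core_id]
}

fn main() {
    assert_eq!(local_apic_id(core_id()), 2);
    println!("core {} has Local APIC ID {}", core_id(), local_apic_id(core_id()));
}
```

Keeping the translation in one module means the scheduler, memory manager, and the AArch64 port never see APIC IDs at all.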
  25. 29 Jun, 2018 1 commit
    • Overhaul the timing framework to improve the global timer resolution to 1 microsecond and simplify the code, also for a later AArch64 port. · aee76165
      Colin Finck authored
      * Rename the udelay() syscall to sys_usleep() for consistency and accept a u64 parameter.
        udelay() with a u32 parameter is still provided to serve existing callers, but should eventually vanish.
      * Use time values in 1 microsecond granularity and u64 everywhere instead of simulating a 10ms timer with the CPU Time-Stamp Counter.
        This guarantees maximum precision for the best timing function we currently provide (sys_usleep).
        It also simplifies the code, because we can simply add microseconds to the timer tick count.
      * Rewrite update_timer_ticks() as get_timer_ticks().
        It simply divides get_timestamp() by get_frequency() now to simulate a 1 microsecond timer with the CPU Time-Stamp Counter.
        This requires no per-core variables and is much more accurate.
      * Calibrate the Local APIC Timer for 1 microsecond resolution.
        This reduces the maximum timeout to 34 seconds on an Intel Xeon E5-2650 v3 @ 2.30GHz, but for longer timeouts, the one-shot timer would simply fire multiple times.
      * Detect CPU support for the TSC-Deadline Mode of the Local APIC Timer and use it.
        This one is easier to program than the legacy One-Shot Mode, even more accurate, and has no maximum timeout.
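The get_timer_ticks() rewrite described above boils down to one integer division. A minimal sketch (function names follow the commit; the TSC value and frequency are made-up constants for the example):

```rust
// Sketch of a 1-microsecond software timer derived from the CPU
// Time-Stamp Counter. get_timestamp() and get_frequency() are
// stand-ins for the real per-architecture implementations.

// Pretend cycle count since boot: 2 seconds at 2.3 GHz.
fn get_timestamp() -> u64 {
    4_600_000_000
}

// CPU frequency in MHz, i.e. cycles per microsecond.
fn get_frequency() -> u64 {
    2_300
}

/// Microseconds since boot. Needs no per-core variables: every call
/// recomputes the tick count directly from the timestamp.
fn get_timer_ticks() -> u64 {
    get_timestamp() / get_frequency()
}

fn main() {
    assert_eq!(get_timer_ticks(), 2_000_000); // 2 s = 2,000,000 us
    println!("uptime: {} us", get_timer_ticks());
}
```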
  26. 06 Jun, 2018 1 commit
  27. 30 May, 2018 1 commit
    • Implement graceful shutdown. · b684516e
      Colin Finck authored
      * Call acpi::poweroff() from arch::x86_64::processor::shutdown().
      * Store the exit code of the last task that finished.
      * Add a sys_shutdown() syscall implemented per SysCallInterface.
        The generic implementation just calls arch::processor::shutdown(), whereas the Proxy and Uhyve SyscallInterfaces send SysExit
        commands over their communication channels.
        This supersedes the per-SyscallInterface sys_exit() call, which would wrongly shut down the entire system when the first task exited.
      * Add a sys_lwip_register_tcpip_task() syscall to register the Task ID of the lwIP TCP/IP thread.
        This enables us to shut down the system when only the lwIP TCP/IP thread is left.
      * Make TaskId u32 to be consistent with the Tid type of the syscalls.
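The per-SyscallInterface dispatch could be modeled as a trait with a default method, roughly like this (trait and type names follow the commit message; the bodies are placeholders that return a description instead of actually shutting anything down):

```rust
// Illustrative sketch of per-interface dispatch for sys_shutdown().
trait SyscallInterface {
    // Generic implementation: power off the local machine,
    // i.e. arch::processor::shutdown() in the kernel.
    fn shutdown(&self) -> &'static str {
        "arch::processor::shutdown()"
    }
}

struct Generic;
impl SyscallInterface for Generic {}

// Proxy and Uhyve override the default and instead send a SysExit
// command over their communication channel.
struct Uhyve;
impl SyscallInterface for Uhyve {
    fn shutdown(&self) -> &'static str {
        "send SysExit over the uhyve channel"
    }
}

fn sys_shutdown(iface: &dyn SyscallInterface) -> &'static str {
    iface.shutdown()
}

fn main() {
    assert_eq!(sys_shutdown(&Generic), "arch::processor::shutdown()");
    assert_eq!(sys_shutdown(&Uhyve), "send SysExit over the uhyve channel");
    println!("ok");
}
```

The default method keeps the generic path in one place, while hypervisor-backed interfaces override only what differs.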
  28. 13 Apr, 2018 1 commit
  29. 04 Apr, 2018 1 commit
  30. 30 Mar, 2018 1 commit
  31. 29 Mar, 2018 1 commit
  32. 22 Mar, 2018 1 commit
  33. 16 Feb, 2018 1 commit
    • Remove the locks around current_task and finished_tasks. Check if we need to wake up a CPU using the new is_halted variable (locked together with ready_queue). · d9ed73bd
      Colin Finck authored
      This reduces the average cycles for a getpid call from 147 to 28 on my ThinkPad X61 (Core 2 Duo T7300).
      Basically every syscall benefits from this change.
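The is_halted idea, where a wakeup is only sent when the target CPU is actually halted, can be sketched like this (hypothetical names; std types stand in for the kernel's spinlocks, and the key point is that is_halted sits under the same lock as ready_queue):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

// Per-CPU scheduler state. Because is_halted is guarded by the same
// lock as ready_queue, there is no window between checking the flag
// and enqueuing a task.
struct SchedulerState {
    ready_queue: VecDeque<u32>, // task IDs
    is_halted: bool,
}

/// Enqueue a task on a CPU's ready queue; return whether that CPU
/// needs a wakeup (an IPI in the kernel) because it is sitting in HLT.
fn wakeup_task(state: &Mutex<SchedulerState>, task_id: u32) -> bool {
    let mut s = state.lock().unwrap();
    s.ready_queue.push_back(task_id);
    s.is_halted
}

fn main() {
    let state = Mutex::new(SchedulerState {
        ready_queue: VecDeque::new(),
        is_halted: false,
    });
    assert!(!wakeup_task(&state, 1)); // CPU busy: no wakeup needed
    state.lock().unwrap().is_halted = true;
    assert!(wakeup_task(&state, 2)); // CPU halted: send a wakeup
}
```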
  34. 12 Feb, 2018 1 commit
    • Fix race conditions and synchronization. · f6de65d8
      Colin Finck authored
      * Previously, the scheduler switched to the idle task, reenabled interrupts, and later halted the CPU with a HLT instruction,
        all without any atomics. This resulted in Wakeup interrupts being received *BEFORE* going into the halt state, and then not
        waking up again.
        To solve this, I'm now using the newly implemented irq::enable_and_wait() to perform the "sti; hlt" sequence, which is
        atomic on x86 according to
      * This and other accesses to current_task can also happen concurrently, so guard it with an RwLock for minimum overhead.
  35. 07 Feb, 2018 1 commit
    • Get rid of the .percore section, MAX_CORES, and megabytes of statically allocated data. · 108d9c9d
      Colin Finck authored
      - Introduce a PerCoreVariables structure statically allocated for the Boot Processor and dynamically allocated when booting each Application Processor.
        Its address is stored in the GS register.
        All fields are of type PerCoreVariable<T>, which implements get() and set() functions for easy per-CPU access to the encapsulated variables.
      - Store the only mutable pointer to the current CPU's scheduler in PerCoreVariables.
        Introduce a function core_scheduler() for easy access. This saves us from doing a BTreeMap search every time we want to access the current CPU's scheduler and should improve performance.
        The SCHEDULERS BTreeMap now contains an immutable reference to each CPU scheduler. This way, we can be sure that only spinlock-guarded fields (like ready_queue) of a different CPU's scheduler can be modified.
      - Use a larger DEFAULT_STACK_SIZE for tasks and the KERNEL_STACK_SIZE only for booting (and later the Idle task).
        Due to Rust's missing support for code generic over different array sizes, I couldn't adapt the KernelStack struct for this and handle stacks as mere pointer addresses now.
      - Dynamically allocate the GDT and TSS to get rid of the MAX_CORES dependency.
      - Introduce an extra_flags parameter in mm::allocate to allow allocating executable and non-executable memory.
        Allocated stacks are now executable to possibly support dynamically generated code on the stack (see
      - Slim down boot.asm and entry.asm to not do any initialization twice that is later done in
        Enable EFER_NXE in Assembler though in order to allow early access to NX-protected memory.
      - Generate a next to the config.asm containing the configured stack sizes. Finally, you can adjust the stack sizes centrally in the CMake configuration again.
        We no longer need a config.h, so remove all references to it.
      - Get rid of and
        Replace the current_task_lwip_errno dummy by sys_lwip_get_errno() and sys_lwip_set_errno() syscalls and move the kputchar() into a proper sys_putchar() syscall.
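The PerCoreVariable<T> get/set interface described above might look roughly like this (a sketch under stated assumptions: in the kernel the containing PerCoreVariables structure is reached through the GS segment register, so here a thread-local merely stands in for that per-CPU indirection, and the field is hypothetical):

```rust
use std::cell::Cell;

// Sketch of the PerCoreVariable<T> wrapper from the commit message.
// get() and set() give easy access to the encapsulated value; in the
// kernel they would be GS-relative reads and writes.
struct PerCoreVariable<T: Copy>(Cell<T>);

impl<T: Copy> PerCoreVariable<T> {
    const fn new(v: T) -> Self {
        PerCoreVariable(Cell::new(v))
    }
    fn get(&self) -> T {
        self.0.get()
    }
    fn set(&self, v: T) {
        self.0.set(v);
    }
}

// One statically allocated instance for the Boot Processor, one
// dynamically allocated per Application Processor; a thread-local
// models "one instance per CPU" here.
struct PerCoreVariables {
    core_id: PerCoreVariable<u32>, // hypothetical example field
}

thread_local! {
    static PER_CORE: PerCoreVariables =
        PerCoreVariables { core_id: PerCoreVariable::new(0) };
}

fn main() {
    PER_CORE.with(|p| p.core_id.set(3));
    let id = PER_CORE.with(|p| p.core_id.get());
    assert_eq!(id, 3);
    println!("core_id = {}", id);
}
```

Wrapping every field in the same accessor type keeps the unsafe GS arithmetic in one place instead of scattering it across callers.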