diff --git a/building/README.md b/building/README.md index f261c244..1f5faed8 100644 --- a/building/README.md +++ b/building/README.md @@ -206,13 +206,13 @@ Firstly, you need to have the docker installed. export PATH="/Applications/Docker.app/Contents/Resources/bin:$PATH" ``` - It's recommended to place it in `.zshrc` startup script to export in every time during startup: + It's recommended to place it in `.zshrc` startup script to export it every time during startup: ```bash echo 'export PATH=/Applications/Docker.app/Contents/Resources/bin:$PATH' >> $HOME/.zshrc ``` -- Check if Docker is properly installed by checking version: +- Check if Docker is properly installed by checking its version: ``` bash docker --version @@ -308,7 +308,7 @@ There is a list of commands you can use to get them: on both Ubuntu and macOS ho *Note that you have to place the `gnubin` path that provides `make` before the `/usr/bin` in the `PATH` environment variable to use the `gnu` version (as it is done above). - Phoenix-RTOS requires the `endian.h` header, which may exist, but not be visible. If during the buildig you discover + Phoenix-RTOS requires the `endian.h` header, which may exist, but not be visible. If during the building you discover the following error: `fatal error: 'endian.h' file not found` please create the symlink to this header by the given command: diff --git a/building/script.md b/building/script.md index 541c4c67..c26793d9 100644 --- a/building/script.md +++ b/building/script.md @@ -9,7 +9,7 @@ TARGET=ia32-generic-qemu phoenix-rtos-build/build.sh all As you can see there can be other arguments like `all`. -You can also use the `clean` argument to clean last build artifacts. +You can also use the `clean` argument to clean the last build artifacts. ```bash TARGET=ia32-generic-qemu phoenix-rtos-build/build.sh clean all @@ -33,8 +33,8 @@ The available components are listed below: - `image` - system image to be loaded to the target, For example, in ia32-generic-qemu target `all` means `core fs image project ports`.
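In other words, for this target the two invocations below should come down to the same build (a sketch based on the component list above; other targets may expand `all` differently):

```bash
# `all` for ia32-generic-qemu expands to the components listed above,
# so these two commands are expected to produce the same result
TARGET=ia32-generic-qemu phoenix-rtos-build/build.sh all
TARGET=ia32-generic-qemu phoenix-rtos-build/build.sh core fs image project ports
```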
-For the other targets `all` can be different components configurations.
-You can also choose what components do you want to build, for example the following command will build a system image +For the other targets, `all` can be different components configurations.
+You can also choose what components you want to build, for example, the following command will build a system image without test and ports components. The `ports` component compiling process can take a while. If you need to build the system image quickly, you can use the command above. diff --git a/building/toolchain.md b/building/toolchain.md index f9d3ce58..a4b1db60 100644 --- a/building/toolchain.md +++ b/building/toolchain.md @@ -7,7 +7,7 @@ Phoenix-RTOS provides its toolchain, based on GNU CC. It's divided into the foll - riscv64-phoenix - sparc-phoenix -Each part delivers the tools required to compile for the given architecture simply. +Each part delivers the tools required to compile the given architecture simply. There are a few reasons why that is helpful - You can easily compile source code for a given Phoenix-RTOS platform, for example, ia32-generic-qemu: diff --git a/coding.md b/coding.md index 13f5915a..f5897184 100644 --- a/coding.md +++ b/coding.md @@ -126,7 +126,7 @@ Function should be not longer than 200 lines of code and not shorter than 10 lin ## Variables Variables should be named with one short words without the underline characters. If one word is not enough for variable -name then use camelCase. When defining a variable assign it a value, do not assume that its value is zero. **In the +name then use camelCase. When defining a variable, assign it a value, do not assume that its value is zero. **In the kernel code always initialize global/static variables in runtime.** There's not `.bss` and `.data` initialization in the kernel. diff --git a/corelibs/README.md b/corelibs/README.md index 57e08dec..32b357cf 100644 --- a/corelibs/README.md +++ b/corelibs/README.md @@ -7,7 +7,7 @@ GitHub repository. The example of usage can be found in the `_user` directory, placed in [phoenix-rtos-project](https://github.com/phoenix-rtos/phoenix-rtos-project). -Read more about reference project repository [here](../project.md). +Read more about the reference project repository [here](../project.md). There are following Phoenix-RTOS libraries: diff --git a/corelibs/libcache.md b/corelibs/libcache.md index 89d63728..ab09573f 100644 --- a/corelibs/libcache.md +++ b/corelibs/libcache.md @@ -227,11 +227,11 @@ into the buffer. The address of the first whole line is computed as follows: ```(address of the first byte to be read - offset of the first byte) + size of a cache line``` The addresses of following lines are computed by adding the size of a whole cache line to the address of a previous - line. Each of these addresses is mapped on to a specific cache set. Lookup in a set is performed according to the + line. Each of these addresses is mapped onto a specific cache set. Lookup in a set is performed according to the algorithm below: -1. The tag computed from the memory address becomes a part of a key used to perform binary search in a table of pointers -to cache lines sorted by the tag (dark gray table in the image above). +1. The tag computed from the memory address becomes a part of a key used to perform a binary search in a table of +pointers to cache lines sorted by the tag (dark gray table in the image above). 2. If a line marked by the tag is found, it becomes the MRU line. The pointers in the circular doubly linked list are rearranged so that this line is stored in the tail of the list. 3. The pointer to the found line is returned. @@ -244,10 +244,10 @@ Writing via the cache is implemented similarly to reading: data is written in th buffer. 
The user might want to update just a few bytes in a specific cache line, hence the line needs to be -found in the cache first. On success the bytes starting from the offset are updated and a chosen to write policy is +found in the cache first. On success, the bytes starting from the offset are updated and a chosen to write policy is executed. -If it happens that a line mapped from a specific address does not exist in the cache, it is +If it happens, that a line mapped from a specific address does not exist in the cache, it is created and written to the cache according to the algorithm below: 1. The pointer to the LRU line is removed from the circular doubly linked list and dereferenced to find a pointer (a @@ -277,7 +277,7 @@ cache clean instead. ### Cleaning the cache -The clean operation combines both cache flush and cache invalidate operations in atomic way while also providing a +The clean operation combines both cache flush and cache invalidate operations in an atomic way while also providing better efficiency than if the user were to perform cache flush followed by cache invalidate. ## Running tests diff --git a/corelibs/libgraph.md b/corelibs/libgraph.md index e411a44a..0c42a445 100644 --- a/corelibs/libgraph.md +++ b/corelibs/libgraph.md @@ -81,7 +81,7 @@ Examples of applications, which use graphics library (`ia32-generic-qemu` target Initializes the `graph_t` structure and opens a context for the specified graphics adapter. The uninitialized `graph_t` structure should be passed in the _`graph`_ argument and the graphics _`adapter`_ should be chosen from the following list: - - `GRAPH_NONE` - the graphics adapter isn't specified, in this case the function returns `-ENODEV` + - `GRAPH_NONE` - the graphics adapter isn't specified, in this case, the function returns `-ENODEV` - `GRAPH_VIRTIOGPU` - generic VirtIO GPU graphics adapter - `GRAPH_VGA` - generic VGA graphics adapter - `GRAPH_CIRRUS` - Cirrus Logic graphics adapter @@ -98,8 +98,8 @@ Examples of applications, which use graphics library (`ia32-generic-qemu` target - `int graph_mode(graph_t *graph, graph_mode_t mode, graph_freq_t freq)` Sets graphics mode with specified screen refresh rate frequency. The initialized _`graph`_ structure should be passed, - and _`mode`_ should be chosen from the `graph_mode_t` enum, placed in the `graph.h` header. The common graphics modes - are presented below: + and _`mode`_ should be chosen from the `graph_mode_t` enum, and placed in the `graph.h` header. The common graphics + modes are presented below: - `GRAPH_DEFMODE` - default graphics mode - `GRAPH_ON` - display enabled mode - `GRAPH_OFF` - display disabled mode @@ -157,20 +157,20 @@ color, graph_queue_t queue)` - `int graph_fill(graph_t *graph, unsigned int x, unsigned int y, unsigned int color, graph_fill_t type, graph_queue_t queue)` - Fills a closed figure with color specified in the _`color`_ argument ((_`x`_, _`y`_) should be any point inside the - figure to fill). The following `graph_fill_t` color fill methods are supported: + Fills a closed figure with the color specified in the _`color`_ argument ((_`x`_, _`y`_) should be any point inside + the figure to fill). The following `graph_fill_t` color fill methods are supported: - `GRAPH_FILL_FLOOD` - works like Windows paint bucket tool (floods homogeneous area, all pixels inside the polygon with color values same as the one at (_`x`_, _`y`_) flood origin point) - `GRAPH_FILL_BOUND` - fills the polygon until an edge of the same color as the fill color is found. 
It can't fill the - figure with color different from the figure boundary + figure with a color different from the figure boundary - `int graph_print(graph_t *graph, const graph_font_t *font, const char *text, unsigned int x, unsigned int y, unsigned char dx, unsigned char dy, unsigned int color, graph_queue_t queue)` Prints text pointed by the _`text`_ argument. Font data should be passed to `graph_font_t` structure. The example is stored in `gfx` directory in [phoenix-rtos-tests](https://github.com/phoenix-rtos/phoenix-rtos-tests.git) - repository (`font.h` file). The remaining arguments are similar to those from functions above. + repository (`font.h` file). The remaining arguments are similar to those from the functions above. - `int graph_move(graph_t *graph, unsigned int x, unsigned int y, unsigned int dx, unsigned int dy, int mx, int my, graph_queue_t queue)` @@ -182,15 +182,15 @@ graph_queue_t queue)` unsigned int dstspan, graph_queue_t queue)` Copies a bitmap pointed by the _`src`_ argument into bitmap pointed by the _`dst`_ argument. The area which is copied - is limited by a rectangle with _`dx`_ and _`dy`_ dimensions. There should also be specified span arguments, which is - the total width of a source/destination bitmap multiplied by its color depth. When copying some part of a bitmap, - _`src`_ should point to the proper element, same with destination buffer. + is limited by a rectangle with _`dx`_ and _`dy`_ dimensions. There should also be specified span arguments, which + represent the total width of a source/destination bitmap multiplied by its color depth. When copying some part + of a bitmap, _`src`_ should point to the proper element, and the same applies to the destination buffer. - `int graph_colorset(graph_t *graph, const unsigned char *colors, unsigned char first, unsigned char last)` Sets a color palette used for 8-bit indexed color mode. A color map should be passed in _`cmap`_ argument. The range of changing colors is set by passing _`first`_ and _`last`_ arguments. If a set color palette's size is lower than a - default one, remaining colors are the same. + default one, the remaining colors are the same. - `graph_colorget(graph_t *graph, unsigned char *colors, unsigned char first, unsigned char last)` @@ -202,9 +202,10 @@ int fg)` Sets cursor icon, _`amask`_ (`AND` mask) and _`xmask`_ (`XOR` mask) arguments determine the shape of the cursor. Default cursor shape is defined in `cursor.h` header file placed in `gfx` directory in `phoenix-rtos-tests` - repository. There is possibility to pass cursor colors - outline color (`bg` argument) and main color (`fg` argument). - The following color format should be applied: `0xAARRGGBB`, where `A` represents alpha, so when it's set to `0xff` - 100% opacity is provided. Opacity isn't supported for cirrus graphics adapter (default for `ia32-generic-qemu` target) + repository. There is a possibility to pass cursor colors - outline color (`bg` argument) and main color + (`fg` argument). The following color format should be applied: `0xAARRGGBB`, where `A` represents alpha, so when it's + set to `0xff` 100% opacity is provided. Opacity isn't supported for cirrus graphics adapter + (default for `ia32-generic-qemu` target) - `int graph_cursorpos(graph_t *graph, unsigned int x, unsigned int y)` @@ -253,7 +254,8 @@ int fg)` ## How to use the graphics library Few simple examples of `libgraph` functions usage. Default graphics adapter (`cirrus`) for `ia32-generic-qemu` running -script is used, default color depth is 4 bytes. 
Before calling mentioned functions following initialization was applied: +script is used, the default color depth is 4 bytes. Before calling mentioned functions the following initialization +was applied: ```c #include @@ -305,7 +307,7 @@ int main(void) - Printing text using libgraph - Header file with a font data in `graph_font_t` structure has to be included. The example of `font.h` is placed in + Header file with font data in `graph_font_t` structure has to be included. The example of `font.h` is placed in `gfx` directory in [phoenix-rtos-tests](https://github.com/phoenix-rtos/phoenix-rtos-tests) repository. ```C @@ -444,12 +446,12 @@ There are few steps to follow: - for other color depths - export the file to C source/header format (a dialog window pops up with additional options for color conversion) -- At this point image binary data should be available (either as array in `.c` or `.h` file or raw hex dump) +- At this point image binary data should be available (either as an array in `.c` or `.h` file or raw hex dump) - Custom image data formatting might be required -If the image bitmap is ready, there is possibility to display it using `graph_copy()`. Please see the proper example in -[How to use libgraph](#how-to-use-the-graphics-library) chapter. +If the image bitmap is ready, there is a possibility to display it using `graph_copy()`. Please see the proper example +in [How to use libgraph](#how-to-use-the-graphics-library) chapter. ## See also diff --git a/corelibs/libswdg.md b/corelibs/libswdg.md index afaa0d85..05cd2819 100644 --- a/corelibs/libswdg.md +++ b/corelibs/libswdg.md @@ -58,7 +58,7 @@ other operation. `chanCount` has to be greater than zero, `priority` has to be g ### Notes - All channels start disabled, -- Channel configuration does not change it's state, channel needs to be enabled if it was not prior, +- Channel configuration does not change its state, channel needs to be enabled if it was not prior, - Callback function **must not** call any libswdg functions! Deadlock will occur. ## Using libswdg @@ -86,4 +86,4 @@ int main() } ``` -Should `doAppStuff()` function hang/crash for more than 30 seconds, system will reset. +Should `doAppStuff()` function hang/crash for more than 30 seconds, the system will reset. diff --git a/corelibs/libuuid.md b/corelibs/libuuid.md index fff7bdb5..2fafe1a9 100644 --- a/corelibs/libuuid.md +++ b/corelibs/libuuid.md @@ -2,7 +2,7 @@ =================== -Linux libuuid compliant library used to generate unique identifiers for objects that may be accessible +Linux libuuid compliant library is used to generate unique identifiers for objects that may be accessible beyond the system. According to `RFC 4122` and `DCE 1.1` (Distributed Computing Environment) currently supported UUID format is variant 1, version 4 (randomly/pseudo-randomly generated). @@ -15,7 +15,7 @@ version 4 (randomly/pseudo-randomly generated). ## General information -Linux libuuid compliant library used to generate unique identifiers for objects that may be accessible beyond the +Linux libuuid compliant library is used to generate unique identifiers for objects that may be accessible beyond the system. According to `RFC 4122` and `DCE 1.1` (Distributed Computing Environment) currently supported UUID format is variant 1, version 4 (randomly/pseudo-randomly generated). 
diff --git a/devices/hwaccess.md b/devices/hwaccess.md index 06c5f40c..9c1ff13c 100644 --- a/devices/hwaccess.md +++ b/devices/hwaccess.md @@ -61,7 +61,7 @@ does not contain the `MAP_FAILED` value, which would indicate that `mmap` failed ### ISA without MMU -On architectures without `MMU` access to the hardware registers does not require prior memory mapping. Registers can be +On architectures without `MMU`, access to the hardware registers does not require prior memory mapping. Registers can be accessed by directly setting a volatile pointer to the desired physical base address. ## See also diff --git a/devices/interface.md b/devices/interface.md index efbc9d00..bdf94fc6 100644 --- a/devices/interface.md +++ b/devices/interface.md @@ -18,7 +18,7 @@ not have to create separate ports for them. The driver needs to assign each "fil Assume we want to create an SPI server that manages 2 instances of the device - spi0 and spi1. We can manage both using only one port by registering the same port as `/dev/spi0` with id = 1 and `/dev/spi1` with id = 2. Every message driver receives contains information to which `oid` (object ID) it has been sent. This enables the driver to recognize to -which special file message has been addressed to. +which special file message has been addressed. If the system does not have a root filesystem, a port can be registered within Phoenix native filesystem by using syscall @@ -67,8 +67,8 @@ Then we can create a new special file and register: ## Message types -There are several standard types of messages, although device driver servers need to implement an only subset of them. -With every message type there are 3 common fields: +There are several standard types of messages, although device driver servers need to implement only a subset of them. +With every message type, there are 3 common fields: - _`type`_ - type of message, - _`pid`_ - process ID of sender, diff --git a/hostutils/psdisk.md b/hostutils/psdisk.md index 180d1fa0..06a314dc 100644 --- a/hostutils/psdisk.md +++ b/hostutils/psdisk.md @@ -20,7 +20,7 @@ To generate an image with a flash memory size, the user should use `-o` option. ## Examples -The following example generates a partition table for MICRON MT25QL01GBBB. The size of the memory and sector bases on +The following example generates a partition table for MICRON MT25QL01GBBB. The size of the memory and sector based on data from ### Creating partition table diff --git a/kernel/hal/README.md b/kernel/hal/README.md index 55d2c5fe..8011652c 100644 --- a/kernel/hal/README.md +++ b/kernel/hal/README.md @@ -81,7 +81,7 @@ space is switched. Timer is the fundamental device for the operating system kernel. It is used for preemptive scheduling and time management. HAL is responsible for the implementation of two timers - a scheduler timer and high precision timer. -On some architectures, they can be based on one hardware device but commonly the are based on two separate devices. +On some architectures, they can be based on one hardware device, but commonly they are based on two separate devices. The interface provided for the upper layer unifies these devices and hides implementation details. HAL implements one function for operating on timers and defines two interrupt numbers respectively for timers used for diff --git a/kernel/hal/ia32.md b/kernel/hal/ia32.md index 07f3033e..61e35681 100644 --- a/kernel/hal/ia32.md +++ b/kernel/hal/ia32.md @@ -1,6 +1,6 @@ # HAL for IA32 based targets -HAL for IA32 architecture is located in `hal/ia32`. 
This chapter presents some important implementations issues. +HAL for IA32 architecture is located in `hal/ia32`. This chapter presents some important implementation issues. ## Initialization @@ -174,7 +174,7 @@ The context for IA32 has been presented below. } cpu_context_t; ``` -First part of the context is stored on the kernel stack automatically by CPU. After this part the general purpose +First part of the context is stored on the kernel stack automatically by CPU. After this part, the general purpose registers are stored. On top of the stack is pushed the stack pointer for context switching. ## See also diff --git a/kernel/proc/README.md b/kernel/proc/README.md index 65496e59..3c3890f7 100644 --- a/kernel/proc/README.md +++ b/kernel/proc/README.md @@ -78,8 +78,8 @@ services. Virtual addressing and private address spaces have also big impact on memory sharing. When a new process is created it can define its private map based on already allocated and named physical memory (see [Memory objects](../vm/objects.md)). This map can be derived from the map of parent process or can be established from -scratch. The smart use of copy-on-write technique allow to allocate the physical memory only for local modifications -made by process threads during their execution (see [Memory objects](../vm/objects.md)). +scratch. The smart use of copy-on-write technique allows for the allocation of physical memory only for local +modifications made by process threads during their execution (see [Memory objects](../vm/objects.md)). ## Process model on architectures not equipped with MMU @@ -128,19 +128,19 @@ transit into the execution mode defined by interrupt/exception/trap vector descr specified execution mode the processor programming model is extended with instructions specific for this mode and address spaces specific to this mode are accessible for the program. When execution on particular execution mode finishes program returns to the previous mode and restores previous program execution context. This return is performed -using special processor instruction. On most processors it is the instruction use to notify of the end of interrupt +using special processor instruction. On most processors, it is the instruction used to notify of the end of interrupt handling. ## Process separation Phoenix-RTOS process model based on address spaces complemented by execution modes constitutes a very powerful mechanism for program separation. Global address spaces can be selectively mapped into the linear address space of selected -processes. Private address spaces can effectively prevent the interference between processes, but they can be seamlessly +processes. Private address spaces can effectively prevent interference between processes, but they can be seamlessly used when MMU is available. Some address spaces (e.g. kernel address space) can be attributed with the processor execution mode required to -access to them. Using extended processor execution modes (e.g. ARM TrustZone or IA32 rings) the intermediate privilege -modes can be introduced. This technique allows to separate the sensitive parts or program executed within a process +access them. Using extended processor execution modes (e.g. ARM TrustZone or IA32 rings) the intermediate privilege +modes can be introduced. This technique allows for separating the sensitive parts or program executed within a process from other parts. 
Privileged and separated address spaces mapped into many processes can consist shared data and code used for example for emulation or to implement managed execution environments. diff --git a/kernel/proc/forking.md b/kernel/proc/forking.md index 71e6a4bc..e9751b6a 100644 --- a/kernel/proc/forking.md +++ b/kernel/proc/forking.md @@ -11,12 +11,12 @@ The well-known method of creating new process in general purpose operating syste The explanation of this method is quite simple. In the certain point of time a thread within a process calls `fork()` system call which creates a new process (child process) based on linear address space and operating system resources used by process calling `fork()` (parent process) and launches the thread within a child process. From this point of -time processes are separated, and they operate on their own address spaces. It means that all modification of process -memory are visible only within them. For example lets consider process A forking into processes A and B. After forking, -one of the threads of process A modifies variable located at address `addr` and stores there value 1 and thread of -process B modifies the same variable at address `addr` and stores there 2. The modification is specific for the forked -processes, and operating system assures that process A sees the variable located at `addr` as 1 and process B sees it as -2. +time processes are separated, and they operate on their own address spaces. It means that all modifications of process +memory are visible only within them. For example, let's consider process A forking into processes A and B. +After forking, one of the threads of process A modifies variable located at address `addr` and stores their value 1 +and thread of process B modifies the same variable at address `addr` and stores there 2. The modification is specific +for the forked processes, and operating system assures that process A sees the variable located at `addr` +as 1 and process B sees it as 2. This technique can be only implemented when processors are equipped with MMU providing mechanisms for memory virtualization (e.g. paging) which enables programs to use the same linear address to access different segments of @@ -26,29 +26,29 @@ physical memory. On processors lacked of MMU the `fork()` method is unavailable, Historically `vfork()` is designed to be used in the specific case where the child will `exec()` another program, and the parent can block until this happens. A traditional `fork()` requires duplicating all the memory of the parent -process in the child what leads to a significant overhead. The goal of the `vfork()` function was to reduce this -overhead by preventing unnecessary memory copying when new process is created. Usually after process creation using -`fork()` function a new program is executed. In such case traditional fork before `exec()` leads to unnecessary -overhead (memory is copied to the child process then is freed and replaced by new memory objects as the result of +process in the child which leads to significant overhead. The goal of the `vfork()` function was to reduce this +overhead by preventing unnecessary memory copying when new process is created. Usually, after process creation using +`fork()` function a new program is executed. In such case, traditional fork before `exec()` leads to unnecessary +overhead (memory is copied to the child process and then is freed and replaced by new memory objects as the result of `exec()`). 
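Going back to the `fork()` semantics described above, the process A/B example can be expressed as a short POSIX sketch (illustrative only; the global `value` plays the role of the variable at `addr`):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0; /* the variable located at `addr` in both address spaces */

int main(void)
{
	pid_t pid = fork();

	if (pid < 0) {
		return 1; /* fork failed */
	}

	if (pid == 0) {
		value = 2; /* modification visible only in the child (process B) */
		printf("child sees %d\n", value);
		return 0;
	}

	value = 1; /* modification visible only in the parent (process A) */
	waitpid(pid, NULL, 0);
	printf("parent sees %d\n", value);

	return 0;
}
```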
In UN*X operating system history "The Mach VM system" added Copy On Write (COW), which made the `fork()` much cheaper, and in BSD 4.4, `vfork()` was made synonymous to `fork()`. -`vfork()` function has another important repercussions for non-MMU architectures. Because of semantics it allows to -launch a new process is the same way as using `fork()` what enables application portability. +`vfork()` function has another important repercussion for non-MMU architectures. Because of semantics, it allows +launching a new process in the same way as using `fork()` which enables application portability. Some consider the semantics of `vfork()` to be an architectural blemish and POSIX.1-2008 removed `vfork()` from the -standard and replaced by `posix_spawn()`. The POSIX rationale for the `posix_spawn()` function notes that that function, -which provides functionality equivalent to `fork()`+`exec()`, is designed to be implementable on systems that lack an -MMU. +standard and replaced it with `posix_spawn()`. The POSIX rationale for the `posix_spawn()` function notes that that +function, which provides functionality equivalent to `fork()`+`exec()`, is designed to be implementable on +systems that lack an MMU. ## Process termination Process can be terminated abnormally - as the consequence of receiving signal or normally after executing `exit()` function. When process exits all of its threads are terminated, all memory objects are unmapped and all resource handles are freed/closed. The parent process receives `SIGCHLD` signal notifying it about the child termination. `SIGCHLD` -signal plays other important role in process termination sequence. It allows to safe remove the remaining child +signal plays another important role in process termination sequence. It allows to safe remove the remaining child process resources which are not able to be removed during the process runtime (e.g. last thread kernel stack). ## Program execution @@ -56,7 +56,7 @@ process resources which are not able to be removed during the process runtime (e To execute a new program the binary object representing it should be mapped into the process linear address space and control have to be passed to the program entry point. This is the responsibility of `exec()` family functions. -On non-MMU architectures there is one important step performed after binary object is mapped and before control is +On non-MMU architectures, there is one important step performed after a binary object is mapped and before control is passed to the program entry point. This step is the program relocation which recalculates some program structures (e.g. `GOT`) used for accessing variables during the runtime. The relocation depends on the current memory location of program. diff --git a/kernel/proc/msg.md b/kernel/proc/msg.md index 07715203..62549a41 100644 --- a/kernel/proc/msg.md +++ b/kernel/proc/msg.md @@ -20,7 +20,7 @@ extern int proc_recv(u32 port, msg_t *msg, unsigned int *rid); extern int proc_respond(u32 port, msg_t *msg, unsigned int rid); ``` -Structure `msg_t` identifies message type and consist of two main parts - input part and output part. +Structure `msg_t` identifies message type and consists of two main parts - input part and output part. Input part points to the input buffer and defines its size. It contains also a small buffer for passing the message application header. The output part has symmetrical architecture to input buffer. 
It contains the pointer to output @@ -29,7 +29,7 @@ buffer, output buffer data length and buffer for output application header. When message is sent by the `proc_send` function the sending thread is suspended until the receiving thread executes `proc_recv` function, reads data from input buffer, writes the final answer to the output buffer and executes `proc_respond`. The `rid` word identifies the receiving context and should be provided to the `proc_respond` function. -There is possible to execute a lot of instruction between receiving and responding procedure. Responding function is +There is possible to execute a lot of instructions between receiving and responding procedures. Responding function is used to wake up the sending thread and inform it that data in output buffer are completed. To prevent copying of big data blocks over the kernel when communication goes between threads assigned to separate diff --git a/kernel/proc/namespace.md b/kernel/proc/namespace.md index 99c023fd..f1dcfd1e 100644 --- a/kernel/proc/namespace.md +++ b/kernel/proc/namespace.md @@ -1,16 +1,16 @@ # Kernel - Processes and threads - Namespace -The namespace and port registering functionality is used by operating system servers (e.g. device drivers, file servers) -as a basic method of integration with the other operating system components. For example if a thread working in the -process context opens the file given by specific path, it indirectly lookups for the port of the file server handling -this object and finally receives the `oid_t`(port, ID) structure identifying the file on the server. It is done because -the file server handling particular file during start registers its port in the namespace handled by the other server -or by the kernel. File server mount its namespace to the existing namespace handled by existing file servers. The -namespace mounting functionality is presented on the following picture. +The namespace and port registering functionality are used by operating system servers +(e.g. device drivers, file servers) as a basic method of integration with the other operating system components. +For example, if a thread working in the process context opens the file given by specific path, it indirectly lookups for +the port of the file server handling this object and finally receives the `oid_t`(port, ID) structure identifying the +file on the server. It is done because the file server handling particular file during start registers its port in the +namespace handled by the other server or by the kernel. File server mounts its namespace to the existing namespace +handled by existing file servers. The namespace mounting functionality is presented on the following picture. -In case of device drivers they register special names in the namespace and associate them with the specific `oids`. +In the case of device drivers, they register special names in the namespace and associate them with the specific `oids`. When program opens the file registered by a device driver it receives `oid` pointed directly to the device driver server, so all communication is redirected to this server. This idea has been briefly presented on following figure. diff --git a/kernel/proc/scheduler.md b/kernel/proc/scheduler.md index 8cc9baa5..b72062b3 100644 --- a/kernel/proc/scheduler.md +++ b/kernel/proc/scheduler.md @@ -18,7 +18,7 @@ the same priority. A scheduling algorithm is defined as follows: 2. The current thread's context for the interrupted core is saved and added to the end of its priority list. 3. 
The next available thread with the highest priority is selected to be run and is removed from the ready thread list. If a selected thread is a ghost (a thread whose process has ended execution) and has not been executed in a supervisor -mode, it is added to the ghosts list and the reaper thread is woke up. +mode, it is added to the ghosts list and the reaper thread woke up. 4. For the selected thread, the following actions are performed: * A global pointer to the current thread is changed to the selected one, * A pointer to the kernel stack is updated to the stack of a new thread, diff --git a/kernel/proc/sync.md b/kernel/proc/sync.md index fa744e71..674b49da 100644 --- a/kernel/proc/sync.md +++ b/kernel/proc/sync.md @@ -9,7 +9,7 @@ Spinlocks are used in kernel for active synchronization of instruction streams e processing cores. They are implemented using special processor instruction allowing to atomically exchange value stored in processor register with a value stored in a memory at specified linear address. This processor instruction belongs to the class of so-called `test-and-set` instructions introduced especially for synchronization purposes. Their logic -may slightly vary between specific processor architectures but the overall semantic remains consistent with atomic +may slightly vary between specific processor architectures, but the overall semantics remains consistent with atomic exchange between memory and processor register. Spinlocks are the basic method of synchronization used to implement mechanisms based on the thread scheduling. They are @@ -39,7 +39,7 @@ processor-specific assembly code. ``` Spinlock unlocking operation is quite simple. Processor atomically changes spinlock value in memory to non-zero and -restores its interrupt state based on state saved in spinlock. It is worth to add that operation on spinlock should +restores its interrupt state based on state saved in spinlock. It is worth adding that operation on spinlock should save and restore processor state from the variable assigned specifically for this particular processor. ```c @@ -51,10 +51,10 @@ save and restore processor state from the variable assigned specifically for thi Locks are used to synchronize access to critical sections inside kernel using scheduling mechanism. The main difference between locks and spinlocks is that they use passive waiting (removal from scheduler queues) instead of active waiting -(iterations until spinlock value becomes non-zero). Locks can be used only when process subsystem is initializes and +(iterations until spinlock value becomes non-zero). Locks can be used only when process subsystem is initialized and scheduler is working. -Each lock consist of spinlock, state variable and waiting queue. +Each lock consists of spinlock, state variable and waiting queue. ## Conditional variables diff --git a/kernel/syscalls/README.md b/kernel/syscalls/README.md index ea770bc0..c473314d 100644 --- a/kernel/syscalls/README.md +++ b/kernel/syscalls/README.md @@ -1,15 +1,15 @@ # System calls System call (commonly abbreviated to syscall) is an entry point to execute a specific user program's request to a -service from the kernel. The operating system kernel runs in a privileged mode to protect a sensitive software and +service from the kernel. The operating system kernel runs in a privileged mode to protect sensitive software and hardware parts from the other software components. A user application executing in an unprivileged mode does not have access to the protected data. 
Performing a hardware interrupt or conducting a trap handled by the kernel, the user -application can obtain sensitive data from the kernel, e.g. an information about all processes running in the system. +application can obtain sensitive data from the kernel, e.g. information about all processes running in the system. ## Prototypes and definition In Phoenix-RTOS prototypes and definitions of the system calls are located in the `libphoenix` library. A list of -the all system calls is placed in a `phoenix-rtos-kernel/include/syscalls.h` header files, grouped by categories. +all system calls is placed in a `phoenix-rtos-kernel/include/syscalls.h` header files, grouped by categories. System call prototypes should be placed in the appropriate header file in the `libphoenix` standard library, referring to the syscall's category. diff --git a/kernel/vm/README.md b/kernel/vm/README.md index 20a4371b..a608ba3d 100644 --- a/kernel/vm/README.md +++ b/kernel/vm/README.md @@ -8,7 +8,7 @@ In most modern general-purpose operating systems, memory management is based on Management Unit (MMU) is used. The MMU is available across many popular hardware architectures (e.g. IA32, x86-64, ARMv7 Cortex-A, RISC-V) and is used for translating the linear addresses used by programs executed on the processor core into the physical memory addresses. This translation is based on linear-physical address associations defined inside MMU -which are specific for each running process allowing to separate them from each-other. The evolution of paging +which are specific for each running process allowing to separate them from each other. The evolution of paging technique and current use of it in general purpose operating systems are briefly discussed in the further parts of this chapter. @@ -16,14 +16,14 @@ The assumption of use of paging technique as the basic method of accessing the m insufficient when operating system shall handle many hardware architectures starting from low-power microcontrollers and ending with advanced multicore architectures with gigabytes of physical memory because MMU is available only on some of them. Moreover, many modern architectures used for IoT device development and massively parallelized multicore -computers are equipped with a non-uniform physical memory (NUMA) with different access characteristics. For example in -modern microcontrollers some physical memory segments can be tightly coupled with processor enabling to run real-time -application demanding minimal jitter (e.g. for signal processing). On multicore architectures some physical memory -segment can be tightly coupled with particular set of processing cores while others segments can be accessible over -switched buses what results in delayed access and performance degradation. Having this in mind in Phoenix-RTOS it was -decided to redefine the traditional approach to memory management and some new memory management abstractions and -mechanisms were proposed. These abstractions and mechanisms allow to unify the approach for memory management on many -types of memory architectures. To understand the details and purpose of these mechanism memory hardware architecture +computers are equipped with a non-uniform physical memory (NUMA) with different access characteristics. For example, +in modern microcontrollers, some physical memory segments can be tightly coupled with processor enabling to run +real-time application demanding minimal jitter (e.g. for signal processing). 
On multicore architectures, some physical +memory segments can be tightly coupled with particular set of processing cores while others segments can be accessible +over switched buses which results in delayed access and performance degradation. Having this in mind in Phoenix-RTOS it +was decided to redefine the traditional approach to memory management and some new memory management abstractions and +mechanisms were proposed. These abstractions and mechanisms allow unifying the approach for memory management on many +types of memory architectures. To understand the details and purpose of these mechanisms memory hardware architecture issues are briefly discussed in this chapter before Phoenix-RTOS memory management functions are briefly presented. ## Paging technique and Memory Management Unit @@ -41,13 +41,13 @@ associations defined for the new process chosen for execution. The structure use commonly and incorrectly called page table and stored in physical memory. On many architectures associations used for defining the linear address space are automatically downloaded to MMU when linear address is reached for the first time and association is not present in MMU. This task is performed by a part of MMU called Hardware Page Walker. On some -architectures with simple MMU (e.g. eSI-RISC) the operating system define associations by controlling MMU directly +architectures with simple MMU (e.g. eSI-RISC) the operating system defines associations by controlling MMU directly using its registers. In this case page table structure depends on software. The role of the MMU in memory address translation is illustrated in the figure below. -In the further considerations the linear address space defined using paging technique will be named synonymously as the +In further consideration, the linear address space defined using paging technique will be named synonymously as the virtual address space. ### Initial concept of paging technique @@ -68,8 +68,8 @@ resumed. The original paging technique is presented below. Over the years, paging has morphed into a technique used for defining the process memory space and for process separation. In general-purpose operating systems, paging is fundamental for memory management. Each process runs in its -own virtual memory space and uses all address ranges for their needs. The address space is defined by a set of -virtual-to-physical address associations for the MMU defined in the physical memory and stored in a structure which is +own virtual memory space and uses all address ranges for its needs. The address space is defined by a set of +virtual-to-physical address associations for the MMU defined in the physical memory and stored in a structure that is much more complicated than a page table used in early computers. This is necessary in order to optimize memory consumption and speed up the virtual-to-physical memory translations. When a process is executed on a selected processor, the address space is switched to its virtual space, which prevents it from interfering with other processes. @@ -79,39 +79,39 @@ into two or more processes to minimize the overall memory usage. -A memory management system which relies on paging describes the whole physical memory using physical pages. +A memory management system that relies on paging describes the whole physical memory using physical pages. 
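As a concrete illustration of the associations such a structure encodes, the classic two-level IA32 scheme with 4 KB pages splits a 32-bit linear address into a page directory index, a page table index and an in-page offset. The snippet below is a generic example of that mechanism, not Phoenix-RTOS-specific code:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t linear = 0xc0100123u; /* example linear (virtual) address */

	uint32_t pdi = (linear >> 22) & 0x3ffu; /* bits 31..22: page directory index */
	uint32_t pti = (linear >> 12) & 0x3ffu; /* bits 21..12: page table index */
	uint32_t off = linear & 0xfffu;         /* bits 11..0: offset within the 4 KB page */

	printf("pdi=%u pti=%u offset=0x%03x\n", (unsigned)pdi, (unsigned)pti, (unsigned)off);

	return 0;
}
```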
## Direct physical memory access complemented with Memory Protection Unit or segmentation Entry-level microcontrollers based on ARM Cortex-M architecture massively used in common electronics devices are typically equipped with embedded FLASH memories and tens of kilobytes of SRAM. Both FLASH and SRAM are accessible using -the same address space. Because of small amount of RAM the MMU is useless and can lead to memory usage overhead. +the same address space. Because of small amount of RAM, the MMU is useless and can lead to memory usage overhead. To provide separation of running processes and used by them physical memory the Memory Protection Unit is used. Memory Protection Unit takes part in memory addressing and typically allows partitioning memory access by defining set of -segments which can be used during the program execution. The number of these segments is usually limited (4, 8, 16). The +segments that can be used during the program execution. The number of these segments is usually limited (4, 8, 16). The access to defined memory segments can be even associated with processor execution mode, so program executed in supervisor mode can operate on more segments than when it is executed in user mode. Processor execution modes and methods of transitioning between them have been Discussed in chapter [Processes and threads](proc/README.md). -There are two strategies of using MPU and memory segmentation. First and more flexible is strategy of switching whole +There are two strategies for using MPU and memory segmentation. First and more flexible is strategy of switching whole MPU context when process context is switched. This demands to reload of all segments defined by MPU for the executed process with the new set defined for the newly chosen process. The big advantage of this technique is that processes are strictly separated because each of them uses its own set of user segments and shares the privileged segments (e.g. kernel segments). There are two disadvantages of this technique caused by typical MPU limitations. The process of redefining segments is slow because it requires the invalidation of cache memories. The method of defining segments in -MPU is limited because segment address and size definition depends on chosen granulation. For example some segments can +MPU is limited because segment address and size definition depend on chosen granulation. For example, some segments can be defined only if all set of MPU registers is used. The second strategy, used in Phoenix-RTOS, is based on memory regions/segments defined for the whole operating system and shared between processes. It means that processes can use during its execution few assigned predefined memory regions called further memory maps. These regions are defined during operating system bootstrap with respect of MPU -register semantics and can be inline with physical memory characteristic. For example separate regions can be defined +register semantics and can be inline with physical memory characteristics. For example, separate regions can be defined for TCM (Tightly Coupled Memory) with short access time and separate regions can be defined for cached RAM. The set of regions assigned to the process is defined during process start-up. When a process context is switched the proper process set of MPU definition is activated using enable/disable bits. This can be done much faster in comparison to reloading the whole region definitions. 
The main disadvantage of described approach is the limited number of regions -which can be used by running processes. This results in a weaker separation of running processes because they must use +that can be used by running processes. This results in a weaker separation of running processes because they must use shared memory regions and erroneous thread of one process can destroy data in another process sharing the same memory region. But it should be emphasized that this technique complemented with proper definition of memory regions can allow fulfilling safety requirements of many types of applications e.g. applications based on software partitioning into two @@ -120,7 +120,7 @@ parts with different safety requirements. Discussion of techniques used for protecting memory when direct physical memory access is used should be complemented by presentation of memory segmentation mechanisms popularized widely by x86 microprocessors. The segmentation technique was developed on early computers to simplify the program loading and execution process. When a process is to be executed, -its corresponding segmentation are loaded into non-contiguous memory though every segment is loaded into a contiguous +its corresponding segmentation is loaded into non-contiguous memory though every segment is loaded into a contiguous block of available memory. The location of program segments in physical memory is defined by special set of processor registers called segment registers and processor instructions are using offsets calculated according to segment base addresses. This technique prevents from using relocation/recalculation of program addresses after it is loaded to memory @@ -147,7 +147,7 @@ These functions will be briefly discussed and elaborated more in the particular Physical memory allocation is the lowest level of the memory management subsystem. It is used to provide memory for the purposes of kernel or mapped object. They are two ways of obtaining physical memory depending on the type of hardware -memory architectures. When paging technique is used the memory is allocated using physical memory pages (page frames). +memory architecture. When paging technique is used the memory is allocated using physical memory pages (page frames). On architecture with direct physical memory access the physical memory is allocated using address space allocation in the particular memory map. It is planned to generalize these techniques in the next version of Phoenix-RTOS memory management subsystem. To understand the physical memory allocation algorithm on architectures using paging technique @@ -177,7 +177,7 @@ constitute the next layers of the memory management subsystem. ### Memory objects -First introduced in the Mach operating system, a memory object defines an entity containing data which could be +First introduced in the Mach operating system, a memory object defines an entity containing data that could be partially or completely loaded into the memory and mapped into one or more address spaces. A good example of this is a binary object representing the program image (e.g. `/bin/sh`). diff --git a/kernel/vm/kmalloc.md b/kernel/vm/kmalloc.md index f511d2d8..4ae68d3e 100644 --- a/kernel/vm/kmalloc.md +++ b/kernel/vm/kmalloc.md @@ -3,23 +3,23 @@ Fine-grained allocator implemented by `vm_kmalloc()` function is the main method of dynamic memory allocation used by the Phoenix-RTOS kernel. 
The operating system kernel uses dynamic data structures to manage dynamic data structures created during the operating system runtime (e.g. process descriptors, threads descriptors, ports). Size of these -structure varies from few bytes to tens of kilobytes. The allocator is able to allocate either the group of memory pages -and to manage the fragments allocated within the page. +structures varies from few bytes to tens of kilobytes. The allocator can allocate either the group of memory pages and +manage the fragments allocated within the page. ## Architecture Fine-grained allocator is based on zone allocator. The architecture is presented on the following picture. -Main allocator data structure is `sizes[]` table. Table entries points to list of zone allocators consisting fragments +Main allocator data structure is `sizes[]` table. Table entries point to list of zone allocators consisting of fragments with sizes proportional to the entry number. Fragments have sizes equal to `2^e` where `e` is the entry number. ## Memory allocation -The first step of allocation process is the calculation of entry number. The best fit strategy is used, so the requested -size is rounded to the nearest power of two. After calculating the entry number the fragment is allocated from the first -zone associated with the entry number. +The first step of the allocation process is the calculation of entry number. The best fit strategy is used, so the +requested size is rounded to the nearest power of two. After calculating the entry number the fragment is allocated +from the first zone associated with the entry number. -If the selected entry is empty and there is no empty zones associated with the entry, the new zone is created and added +If the selected entry is empty and there are no empty zones associated with the entry, the new zone is created and added to the list. New zone is added either to `sizes[]` table and to the zone RB-tree. The zone RB-tree is used to find the proper zone when a fragment is released. diff --git a/kernel/vm/mapper.md b/kernel/vm/mapper.md index 57ccf447..976f521c 100644 --- a/kernel/vm/mapper.md +++ b/kernel/vm/mapper.md @@ -16,7 +16,7 @@ A memory map (`vm_map_t` structure) is the main structure used for describing th red-black tree structure, the memory map stores entries (`map_entry_t`) which describe the memory segments. The memory map belongs both to the kernel and processes. In non-MMU architectures, the kernel and processes share the same memory map. In MMU architectures, each process has its own separate memory map defining the user mappings. The kernel uses a -separate memory map which describes the parts of the address space which belong to the kernel. +separate memory map that describes the parts of the address space which belong to the kernel. The map definition and its entry is presented below. diff --git a/kernel/vm/objects.md b/kernel/vm/objects.md index 5ffb276c..7a5f6d5b 100644 --- a/kernel/vm/objects.md +++ b/kernel/vm/objects.md @@ -2,16 +2,16 @@ Memory objects were introduced to share the physical memory between processes allowing to identify the sets of allocated memory pages or segments of physical memory on non-MMU architectures. When process maps object into its memory -space kernel allocates physical memory for the objects data and copies it from the backing storage (e.g. filesystem). 
-When other process maps the same object into its address space the most of already allocated memory for the object +space kernel allocates physical memory for the object's data and copies it from the backing storage (e.g. filesystem). +When other process maps the same object into its address space most of the already allocated memory for the object purposes can be shared. Only the memory for local in-process modifications should be allocated. The technique used for allocating the memory for the purposes of in-process object modifications when the write access is performed is known as copy-on-write. It is based on some features of MMU or segment management unit -Memory objects are used to optimize the memory usage. They are also used as the basis for shared libraries. The shared -libraries are the libraries loaded during the process execution. They are loaded using object mapping technique, what -result that only one library code instance exists in memory. Library data segments are allocated using copy-on-write -technique. To use the shared library in the process context the dynamic linking should be performed. +Memory objects are used to optimize memory usage. They are also used as the basis for shared libraries. The shared +libraries are the libraries loaded during the process execution. They are loaded using object mapping technique, which +results that only one library code instance exists in memory. Library data segments are allocated using copy-on-write +technique. To use the shared library in the process context dynamic linking should be performed. Memory objects were introduced in Mach operating system. They were quickly derived from it and implemented in UN*X BSD and other operating systems. The Mach and BSD implementations were not optimal because of the way of implementation of @@ -25,7 +25,7 @@ microkernel. Process’s address space is constituted by set of mapped objects. In traditional operating system memory objects correspond with files or devices (and are identified by `vnode`, `inode` etc.) or with anonymous objects representing -the dynamically allocated physical memory. There are two strategies of retrieving object data into the process memory – +the dynamically allocated physical memory. There are two strategies for retrieving object data into the process memory – immediate retrieval strategy when object is mapped (e.g. during process start) and lazy on-demand retrieval strategy when virtual page is first-time accessed during the runtime. @@ -46,8 +46,8 @@ process stack (stack used by the main thread). As well as `bss` and `heap` segme object. After the stack kernel segments are mapped. These segments are inaccessible when the thread runs on the user-level. When control is transferred explicitly to the kernel via the system call or implicitly via interrupt, the executed program is able to access this memory. The described mechanism of separation of the kernel from user memory -is the basic mechanism constituting the operating system security and reliability and preventing the interference -between the operating system and processes. +is the basic mechanism constituting the operating system's security and reliability and preventing interference between +the operating system and processes. The process address space in Phoenix-RTOS is presented on the following figure. @@ -62,7 +62,7 @@ process at requested virtual address. 
The main difference between the monolithic kernel approach and Phoenix-RTOS is that memory segments correspond to
objects identified by oids (port and in-server ID) handled by external servers, so the operating system kernel is free of
file abstraction. This allows maintaining the small size of the kernel and emulating many file sharing and inheritance
-strategies on the user level (POSIX, Windows etc.) or event to create the final operating system lacked of filesystem
+strategies on the user level (POSIX, Windows etc.), or even creating the final operating system without filesystem
abstraction.
## Memory objects in Mach/BSD operating systems
@@ -105,8 +105,8 @@ The following figure shows how shadow object chains are formed in BSD VM.
-A three-page file object is copy-on-write memory mapped into a process’ address space. The first column shows the first
-step of memory mappings. The new entry with the needs-copy and copy-on-write flags is allocated. It points the
+A three-page file object is copy-on-write memory mapped into a process’s address space. The first column shows the first
+step of memory mappings. The new entry with the needs-copy and copy-on-write flags is allocated. It points to the
underlying object. Once a write fault occurs, a new memory object is created and that object tracks all the pages that
have been copied and modified.
@@ -119,7 +119,7 @@ read-write into the faulting process’ address space.
The third column shows the BSD VM data structures after the process with the copy-on-write mapping forks a child, the
parent writes to the middle page, and the child writes to the right-hand page. When the parent forks, the child receives
-a copy-on-write copy of the parent’s mapping. This is done by write protecting the parent’s mappings and setting
+a copy-on-write copy of the parent’s mapping. This is done by write-protecting the parent’s mappings and setting
needs-copy in both processes. When the parent faults on the middle page, a second shadow object is allocated for it and
inserted on top of the first shadow object. When the child faults on the right-hand page, the same thing happens,
resulting in the allocation of a third shadow object.
@@ -127,7 +127,7 @@ resulting in the allocation of a third shadow object.
Shadow objects are very problematic in terms of operating system efficiency and resource management.
The presented copy-on-write mechanism can leak memory by allowing pages that are no longer accessible to remain within an
-object chain. In the example the remaining shadow object chain contains three copies of the middle page, but only two
+object chain. In the example, the remaining shadow object chain contains three copies of the middle page, but only two
are accessible. The page in the first shadow object is no longer accessible and should be freed to prevent the memory
leak. BSD VM attempts to collapse a shadow object chain when it is possible (e.g. when a new shadow object is created),
but searching for objects that can be collapsed is a complex process.
@@ -168,7 +168,7 @@ the process context is presented on the following figure.
There are three main differences between UVM and Phoenix-RTOS memory objects. Objects are identified by oid_t and
-handled by external servers and data is fetched and stored using message passing. Processes are not swap'able, so there
+handled by external servers and data is fetched and stored using message passing. Processes are not swappable, so there
is no swap server for anonymous objects.
Memory objects are supported as well on non-MMU architectures, but functionality is simplified. diff --git a/kernel/vm/page.md b/kernel/vm/page.md index 76c0b347..029bf73a 100644 --- a/kernel/vm/page.md +++ b/kernel/vm/page.md @@ -127,10 +127,10 @@ is illustrated below. The first page set is removed from the list and divided into two 64 KB regions. The upper 64 KB region is added to the `size[16]` entry and then split. The first 64 KB region is split into two 32 KB regions. The upper 32 KB region is -returned to the `size[15]` entry. Next, the first half of the region is divided into two 16 KB regions, and finally +returned to the `size[15]` entry. Next, the first half of the region is divided into two 16 KB regions, and finally, only one page is available. This page is returned as an allocation result. The complexity of this allocation is -O(log2N). The maximum number of steps which should be performed is the size of `size[]` array minus +O(log2N). The maximum number of steps that should be performed is the size of `size[]` array minus the log2(page size). The maximum cost of page allocation on a 32-bit address space is 20 steps. @@ -141,7 +141,7 @@ Page deallocation is defined as the process opposite to the page allocation proc ### Sample deallocation Let us assume that the page allocated in the previous section must be released. The first step is to analyze the -neighborhood of the page based on the `pages[]` array. The array is sorted and it is assumed that the +neighborhood of the page based on the `pages[]` array. The array is sorted, and it is assumed that the next page for the released `page_t` is the `page_t` structure, describing the physical page located right after the released page or the page located on higher physical addresses. If the next `page_t` structure describes the neighboring page, and if it is marked as free, the merging process is performed. The next page is removed from @@ -174,8 +174,8 @@ created, assuming a given page size. The number of `page_t` entries is proportio } ``` -The assumed page size depends on the architecture and the available memory size. For microcontrollers with a small -memory size, the page size is typically 256 bytes. +The assumed page size depends on the architecture and the available memory size. For microcontrollers with small memory +sizes, the page size is typically 256 bytes. Page allocation is quite simple. It just retrieves the first `page_t` entry from the pool. In the deallocation process, a `page_t` is returned to the pool. It must be noted that the real memory allocation is performed during the memory diff --git a/libc/functions/a/acos.part-impl.md b/libc/functions/a/acos.part-impl.md index e44d484a..8939c7f1 100644 --- a/libc/functions/a/acos.part-impl.md +++ b/libc/functions/a/acos.part-impl.md @@ -29,7 +29,7 @@ before calling these functions. On return, if `errno` is non-zero or Upon successful completion, these functions shall return the arc cosine of _x_, in the range `[0, pi]` radians. -For finite values of _x_ not in the range `[-1,1]`, a domain error shall occur, and either a `NaN` (if supported), or +For finite values of _x_ not in the range `[-1,1]`, a domain error shall occur, and either a `NaN` (if supported) or an implementation-defined value shall be returned. * If _x_ is `NaN`, a `NaN` shall be returned. 
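As a usage illustration for the `acos()` error handling described just above (clearing `errno` before the call and
checking it afterwards), a minimal sketch is shown below. It is generic C99/POSIX code, not taken from the Phoenix-RTOS
libc sources, and it assumes the implementation reports domain errors through `errno`/`EDOM`.

```c
#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
	double x = 2.0; /* outside [-1, 1], so a domain error is expected */
	double res;

	errno = 0; /* clear errno before the call, as the description above recommends */
	res = acos(x);

	if (errno != 0)
		printf("acos(%g) failed: domain error (errno=%d)\n", x, errno);
	else
		printf("acos(%g) = %g rad\n", x, res);

	return 0;
}
```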
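Both the fine-grained allocator and the page allocator hunks earlier in this patch describe picking an entry whose
fragment size is the request rounded up to a power of two. A minimal sketch of that entry-number calculation is given
below; the function name `alloc_entry()` and the minimum fragment size are illustrative assumptions, not the kernel's
actual code.

```c
#include <stddef.h>

/* Assumed minimum fragment size of 2^4 = 16 bytes (illustrative only). */
#define MIN_ENTRY 4

/* Return the entry e such that 2^e is the smallest power of two
 * that can hold the requested size (best fit, rounded up). */
static unsigned int alloc_entry(size_t size)
{
	unsigned int e = MIN_ENTRY;

	while (((size_t)1 << e) < size)
		e++;

	return e;
}
```

For example, a 100-byte request maps to entry 7, i.e. a 128-byte fragment.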
diff --git a/libc/functions/b/bind.part-impl.md b/libc/functions/b/bind.part-impl.md index 2dd2f9b7..65b5d637 100644 --- a/libc/functions/b/bind.part-impl.md +++ b/libc/functions/b/bind.part-impl.md @@ -50,7 +50,7 @@ only by their address family. [`EINPROGRESS`] - `O_NONBLOCK` is set for the file descriptor for the socket and the assignment cannot be immediately performed; the assignment is performed asynchronously. -[`EINVAL`] - the socket is already bound to an address, and the protocol does not support binding to a new address; or +[`EINVAL`] - the socket is already bound to an address, and the protocol does not support binding to a new address, or the socket has been shut down. [`ENOBUFS`] - insufficient resources were available to complete the call. diff --git a/libc/functions/b/bsearch.impl.md b/libc/functions/b/bsearch.impl.md index bf97ee5b..84f7a091 100644 --- a/libc/functions/b/bsearch.impl.md +++ b/libc/functions/b/bsearch.impl.md @@ -21,8 +21,8 @@ member that matches the object pointed to by _key_. The size (in bytes) of each by _size_. The contents of the array should be in ascending sorted order according to the comparison function referenced by -`compar`. The `compar` routine is expected to have two arguments which point to the key object and to an array member, -in that order. It should return an integer which is less than, equal to, or greater than zero if the key object is +`compar`. The `compar` routine is expected to have two arguments that point to the key object and to an array member, +in that order. It should return an integer that is less than, equal to, or greater than zero if the key object is found, respectively, to be less than, to match, or be greater than the array member. ### Return value diff --git a/libc/functions/c/calloc.part-impl.md b/libc/functions/c/calloc.part-impl.md index 998ec8af..8154500a 100644 --- a/libc/functions/c/calloc.part-impl.md +++ b/libc/functions/c/calloc.part-impl.md @@ -18,7 +18,7 @@ The purpose is to allocate a memory. The `calloc()` function shall allocate unus elements each of whose size in bytes is _elsize_. The space shall be initialized to all bits `0`. -The order and contiguity of storage allocated by successive calls to `calloc()` is unspecified. The pointer returned if +The order and contiguity of storage allocated by successive calls to `calloc()` are unspecified. The pointer returned if the allocation succeeds shall be suitably aligned so that it may be assigned to a pointer to any type of object and then used to access such an object or an array of such objects in the space allocated (until the space is explicitly freed or reallocated). Each such allocation shall yield a pointer to an object disjoint from any other object. The pointer diff --git a/libc/functions/c/condSignal.phrtos.md b/libc/functions/c/condSignal.phrtos.md index 9c555579..114cb646 100644 --- a/libc/functions/c/condSignal.phrtos.md +++ b/libc/functions/c/condSignal.phrtos.md @@ -25,8 +25,9 @@ variable _h_ (if any threads are blocked on _h_). If more than one thread is blocked on a condition variable, the scheduling policy shall determine the order in which threads are unblocked. When each thread unblocked as a result of a `condBroadcast()` or `condSignal()` returns from its -call to `condWait()`, the thread shall own the mutex with which it called `condWait()`. The thread(s) that are unblocked -shall contend for the mutex according to the scheduling policy (if applicable), and as if each had called `mutexLock()`. 
+call to `condWait()`, the thread shall own the mutex with which it called `condWait()`. The thread(s) that are
+unblocked shall contend for the mutex according to the scheduling policy (if applicable),
+and as if each had called `mutexLock()`.
The `condBroadcast()` or `condSignal()` functions may be called by a thread whether or not it currently owns the mutex
that threads calling `condWait()` have associated with the condition variable
diff --git a/libc/functions/c/condWait.phrtos.md b/libc/functions/c/condWait.phrtos.md
index a26f129f..8d68964d 100644
--- a/libc/functions/c/condWait.phrtos.md
+++ b/libc/functions/c/condWait.phrtos.md
@@ -55,7 +55,7 @@ The behavior is undefined if the value specified by the _h_ or mutex _m_ argumen
to an initialized condition variable or an initialized mutex object, respectively.
If _timeout_ is nonzero, `-ETIME` is returned if the condition is not signaled after waiting for _timeout_ microseconds. Zero
-_timeout_ waits indefinitely until condition is signalled. Note that due to internal implementation timeout is restarted
+_timeout_ waits indefinitely until the condition is signaled. Note that due to the internal implementation the timeout is restarted
when a signal is received (retry on `EINTR`).
## Return value
diff --git a/libc/functions/c/crypt.part-impl.md b/libc/functions/c/crypt.part-impl.md
index b7be9f61..c93619b5 100644
--- a/libc/functions/c/crypt.part-impl.md
+++ b/libc/functions/c/crypt.part-impl.md
@@ -41,7 +41,7 @@ error.
The `crypt()` function shall fail if:
-* `ENOSYS` - The functionality is not supported on this implementation.
+* `ENOSYS` - The functionality is not supported in this implementation.
## Tests
diff --git a/libc/functions/d/dprintf.part-impl.md b/libc/functions/d/dprintf.part-impl.md
index fd0b9d16..6b6d32d0 100644
--- a/libc/functions/d/dprintf.part-impl.md
+++ b/libc/functions/d/dprintf.part-impl.md
@@ -89,7 +89,7 @@ digit string is treated as zero. If a precision appears with any other conversio
* A conversion specifier character that indicates the type of conversion to be applied.
-A field width, or precision, or both, may be indicated by a (`'*'`). In this case an argument of type
+A field width, or precision, or both, may be indicated by an asterisk (`'*'`). In this case, an argument of type
`int` supplies the field width or precision. Applications ensure that arguments specifying field width, or precision,
or both appear in that order before the argument, if any, to be converted. A negative field width is taken as a `'-'`
flag followed by a positive field width. A negative precision is taken as if the precision were omitted. In format
diff --git a/libc/functions/f/freopen.part-impl.md b/libc/functions/f/freopen.part-impl.md
index a0613469..b20b4e0b 100644
--- a/libc/functions/f/freopen.part-impl.md
+++ b/libc/functions/f/freopen.part-impl.md
@@ -54,7 +54,7 @@ returned, and `errno` shall be set to indicate the error.
The `freopen()` function shall fail if:
-* `EACCES` - Search permission is denied on a component of the path prefix, or the file exists and the permissions
+* `EACCES` - Search permission is denied on a component of the path prefix, or the file exists, and the permissions
specified by _mode_ are denied, or the file does not exist and write permission is denied for the parent directory of
the file to be created.
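To make the mutex-ownership rule from the `condSignal()`/`condWait()` pages above more concrete, here is a minimal
producer/consumer sketch. It assumes the Phoenix-RTOS `<sys/threads.h>` API with `handle_t` handles and
`mutexCreate()`/`condCreate()` creation calls in addition to the functions documented above; treat those assumed names
as illustrative rather than authoritative, and note that error checking is omitted.

```c
#include <sys/threads.h>

/* Sketch only: handle_t, mutexCreate() and condCreate() are assumed here. */
static handle_t lock, cond;
static int ready;

void init(void)
{
	mutexCreate(&lock);
	condCreate(&cond);
}

void consumer(void)
{
	mutexLock(lock);
	while (ready == 0) {
		/* Zero timeout waits indefinitely; on return the mutex is owned again,
		 * so the predicate can safely be re-checked in the loop. */
		condWait(cond, lock, 0);
	}
	ready = 0;
	mutexUnlock(lock);
}

void producer(void)
{
	mutexLock(lock);
	ready = 1;
	condSignal(cond); /* wakes one thread blocked on the condition variable */
	mutexUnlock(lock);
}
```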
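The `'*'` field width/precision rule quoted in the `dprintf()` hunk above is easier to read next to a concrete call.
The snippet below is plain POSIX usage (not Phoenix-specific): the two `int` arguments before the value supply the
width and the precision, in that order.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int width = 10;
	int prec = 3;

	/* The '*' placeholders consume the width and precision arguments first,
	 * then the value to convert: this prints "[     3.142]" */
	dprintf(STDOUT_FILENO, "[%*.*f]\n", width, prec, 3.14159);

	return 0;
}
```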
diff --git a/libc/functions/func_template.md b/libc/functions/func_template.md
index ee153822..379f829a 100644
--- a/libc/functions/func_template.md
+++ b/libc/functions/func_template.md
@@ -11,7 +11,7 @@ Partially implemented
-
+
## Conformance
IEEE Std 1003.1-2017
diff --git a/libc/functions/s/strlen.part-impl.md b/libc/functions/s/strlen.part-impl.md
index 45eb679f..eea737ef 100644
--- a/libc/functions/s/strlen.part-impl.md
+++ b/libc/functions/s/strlen.part-impl.md
@@ -19,7 +19,7 @@ IEEE Std 1003.1-2017
The `strlen()` function shall compute the number of bytes in the string to which _s_ points, not including the terminating `NUL` character. The
-`strnlen()` function shall compute the smaller of the number of bytes in the array to which _s_ points, not including
+`strnlen()` function shall compute the smaller of the number of bytes in the array to which _s_ points, not including
any terminating `NUL` character, or the value of the _maxlen_ argument. The `strnlen()` function shall never examine
more than _maxlen_ bytes of the array pointed to by _s_.
diff --git a/libc/posix.md b/libc/posix.md
index 3c77fca4..6b9a443d 100644
--- a/libc/posix.md
+++ b/libc/posix.md
@@ -12,7 +12,7 @@ The purpose of `posixsrv` is to store data that can be shared between processes,
It also registers and handles special files, such as `/dev/null` or `/dev/random`.
-In the current implementation some parts of `posixsrv` functionality is kept inside the kernel and accessed using a set
+In the current implementation, some parts of `posixsrv` functionality are kept inside the kernel and accessed using a set
of system calls. Future implementations will instead delegate requests directly to `posixsrv`.
## Source code
diff --git a/loader/README.md b/loader/README.md
index 49c1cd80..fa40f7bd 100644
--- a/loader/README.md
+++ b/loader/README.md
@@ -31,7 +31,7 @@ Acting as a first-stage, plo configures the memory controllers and a variety of
platform. It is also responsible for setting the initial processor's clock values and preparing the board for the
kernel. The loader runs in a supervisor mode and doesn't support FPU and MMU on all architectures.
-During the second-stage booting, it loads the operating system and selected applications from storage devices or via
+During the second stage of booting, it loads the operating system and selected applications from storage devices or via
interfaces like serial or USB (acting as USB client) to the memory. For more complex platforms, additional work can be
performed, like loading a bitstream to the FPGA or testing specific components.
diff --git a/loader/architecture.md b/loader/architecture.md
index 3afe18fb..16102231 100644
--- a/loader/architecture.md
+++ b/loader/architecture.md
@@ -33,7 +33,7 @@ Devices are the hardware dependent subsystem containing a collection of drivers
loader components. Each driver has to register itself using constructor invocation. During bootloader initialization,
the registered devices are initialized and appropriate `major.minor` numbers are assigned to them. The other plo
components refer to specific devices using `major.minor` identification. The minor number indicates the device
-instance and are assigned dynamically. However, the major numbers are static and refer to the following device types:
+instance and is assigned dynamically.
However, the major numbers are static and refer to the following device types: * `0` - UART * `1` - USB @@ -77,7 +77,7 @@ PLLs, external memory controllers like DDR and preparing other crucial component Console is used for presenting plo messages until the device driver for the console is initialized. It is typically based on UART, but it can use other display devices (on IA32 there is a console based on VGA graphics adapter and -keyboard). Initially the console should be kept as simple as possible, so it works from the early boot stage. It +keyboard). Initially, the console should be kept as simple as possible, so it works from the early boot stage. It does not use interrupts or other HAL mechanisms, nor allow the loader to read data. ### Strings @@ -104,7 +104,7 @@ Common routines contain the following units: * `circular buffer` - basic interface to push and pop data to buffer -* `console` - unit sets console to specific device and print data on it +* `console` - unit sets console to specific device and prints data on it * set of functions to handle `character types` diff --git a/loader/cli.md b/loader/cli.md index 87f1f98c..a54a495e 100644 --- a/loader/cli.md +++ b/loader/cli.md @@ -30,7 +30,7 @@ List all the available commands in plo (some of them are available only on the s * `phfs` - registers device in phfs, usage: `phfs [ [protocol]]` * `script` - shows script, usage: `script [ ]` * `test-ddr` - perform test DDR, usage: `test-ddr` -* `wait` - waits in milliseconds or in infinite loop, usage: `wait [ms]` +* `wait` - waits in milliseconds or in an infinite loop, usage: `wait [ms]` ## See also diff --git a/lwip/lwip-pppou.md b/lwip/lwip-pppou.md index 171e7bea..93744fd6 100644 --- a/lwip/lwip-pppou.md +++ b/lwip/lwip-pppou.md @@ -6,7 +6,7 @@ of the lack of a proper interface (Ethernet, Wi-Fi) the use of uart in conjunction with an appropriate adapter, be it USB, Bluetooth or optical/infrared uart may be the easiest to connect both worlds. -Almost every microcontroller has at least one uart, may not have Ethernet MAC, +Almost every microcontroller has at least one uart, and may not have Ethernet MAC, Wi-Fi or Bluetooth, but uart/serial null-modem connection is possible always and the most legitimate and proven protocol to deliver IP world is PPP. diff --git a/ports/README.md b/ports/README.md index 5b53dae5..eed35a9d 100644 --- a/ports/README.md +++ b/ports/README.md @@ -17,7 +17,7 @@ Following ports are possible to use: - `busybox` - application suite that provides several UN*X utilities, - `curl` - command-line tool for transferring data using various network protocols, -- `dropbear` - package that that provides SSH-compatible server and client, +- `dropbear` - package that provides SSH-compatible server and client, - `jansson` - library for encoding, decoding and manipulating JSON data, - `libevent`- library that provides asynchronous event notification, - `lighttpd`- web server optimized for speed-critical environments, @@ -32,7 +32,7 @@ Following ports are possible to use: - `wpa_supplicant` - Wi-Fi Protected Access client and `IEEE 802.1X` supplicant - [azure_sdk](azure_sdk.md) - Azure IoT C Software Development Kit - + ## See also diff --git a/ports/azure_sdk.md b/ports/azure_sdk.md index a277b1e9..87b95934 100644 --- a/ports/azure_sdk.md +++ b/ports/azure_sdk.md @@ -16,7 +16,7 @@ There are stored adaptations needed to run `azure-iot-sdk-c` on Phoenix-RTOS. 
The Azure IoT C Software Development Kit provides the interface to communicate easily with Azure IoT Hub, Azure IoT Central,
-and to Azure IoT Device Provisioning. It's intended for apps written in C99 (or newer) or C++. For more information
+and Azure IoT Device Provisioning. It's intended for apps written in C99 (or newer) or C++. For more information
please visit the [Azure IoT C SDK GitHub](https://github.com/Azure/azure-iot-sdk-c).
## Supported version
@@ -168,7 +168,7 @@ You can read messages received from Azure, for example using `AzureIotHub VS Cod
## Using azure-iot-sdk-c
The above guide shows how to run only one of the provided samples. To write your own programs using the SDK please read
-the following instructions. It may be helpful for the other architectures, like `armv7m7-imxrt106x-evk`, where the
+the following instructions. It may be helpful for other architectures, like `armv7m7-imxrt106x-evk`, where the
previously generated sample may not work. That's the reason why the following example is adjusted to the configuration
with `mbedtls` intended for 'smaller' targets (now only the `imxrt106x` is supported). If you want to write your own
programs intended for the `openssl` configuration ('larger' targets, like `ia32-generic-qemu`), there will be a few
@@ -351,7 +351,7 @@ in the specific building script in `_projects` directory or using an environment
To build `azure_sdk` tests, please set the `LONG_TEST=y` environment variable before calling `build.sh`.
-In the result unit tests for the `c-utility` component should be placed in the `/bin` directory.
+As a result, unit tests for the `c-utility` component should be placed in the `/bin` directory.
The tests have the `ut_exe` suffix, for example: `connectionstringparser_ut_exe`. You run them as follows:
diff --git a/quickstart/README.md b/quickstart/README.md
index b959b909..49acd1a8 100644
--- a/quickstart/README.md
+++ b/quickstart/README.md
@@ -1,6 +1,6 @@ # Running system on targets
-This chapter presents how to run Phoenix-RTOS on supported targets. It is assumed that `phoenix-rtos-project` is built
+This chapter presents how to run Phoenix-RTOS on supported targets. It is assumed that `phoenix-rtos-project` is built,
and the build artifacts are available in the `_boot` directory. The building process has been described in
[phoenix-rtos-doc/building](../building/README.md).
diff --git a/quickstart/armv7a9-zynq7000.md b/quickstart/armv7a9-zynq7000.md
index 6cc83610..ab29972f 100644
--- a/quickstart/armv7a9-zynq7000.md
+++ b/quickstart/armv7a9-zynq7000.md
@@ -28,7 +28,7 @@ from the site below.
- Phoenix-RTOS loader does not appear:
- When booting using SD card: Make sure that a proper `BOOT.bin` file
- is placed on the card, and it's in a binary format (right click → properties):
+ is placed on the card, and that it's in a binary format (right click → properties):
diff --git a/quickstart/riscv64-generic-qemu.md b/quickstart/riscv64-generic-qemu.md
index 58b4a6db..cf98814a 100644
--- a/quickstart/riscv64-generic-qemu.md
+++ b/quickstart/riscv64-generic-qemu.md
@@ -1,6 +1,6 @@ # Running system on `riscv64-generic-qemu`
-This version is designated for RISC-V 64 processors based virt machine implemented by `qemu-system-riscv64`.
+This version is designated for the RISC-V 64 processor-based virtual machine implemented by `qemu-system-riscv64`.
To launch this version, two files should be provided - a kernel file integrated with SBI firmware (with an embedded
UART16550 interface driver, dummyfs filesystem and the `psh` shell) and a disk image with an ext2 filesystem.
@@ -42,7 +42,7 @@ Firstly, you need to install QEMU emulator.
- How to get QEMU (Mac OS) + How to get QEMU (macOS) - Install the required packages diff --git a/quickstart/riscv64-generic-spike.md b/quickstart/riscv64-generic-spike.md index 986476a0..8177fc5f 100644 --- a/quickstart/riscv64-generic-spike.md +++ b/quickstart/riscv64-generic-spike.md @@ -86,7 +86,7 @@ Just like before, you first need to install the emulator.
- How to get QEMU (Mac OS) + How to get QEMU (macOS) - Install the required packages diff --git a/tests/README.md b/tests/README.md index 9d73a390..6b93bfeb 100644 --- a/tests/README.md +++ b/tests/README.md @@ -200,10 +200,11 @@ possible. ## Example 2: unit tests using C -In this section, we will explore another type of test: unit testing. When we talk about testing, unit testing often come -to mind, which is why the test runner has native support for unit testing. Similar to the previous section, we start by -creating a C file named `dummy.c` located in the `phoenix-rtos-tests/dummy` directory. To write unit tests, we will use -the modified [Unity Test](http://www.throwtheswitch.org/unity), a third party unit testing framework built for C. +In this section, we will explore another type of test: unit testing. When we talk about testing, unit testing often +comes to mind, which is why the test runner has native support for unit testing. Similar to the previous section, +we start by creating a C file named `dummy.c` located in the `phoenix-rtos-tests/dummy` directory. To write unit tests, +we will use the modified [Unity Test](http://www.throwtheswitch.org/unity), a third party unit testing framework +built for C. ```c #include @@ -478,12 +479,12 @@ Now let's go through the tests and try to understand the final configuration: - The `arg_zero` test specifies that the `test-hello-arg` executable should be executed without any arguments (`execute: test-hello-arg`). We provide the `hello_arg_harness.py` as the harness. In the `kwargs` section, we set -`argc` to `0`. This dictionary is passed later to the harness as `kwargs` parameter. Additional, we exclude the +`argc` to `0`. This dictionary is passed later to the harness as `kwargs` parameter. Additionally, we exclude the `armv7a9-zynq7000-qemu` target for this specific test. As a result, it will be run on the `ia32-generic-qemu` and `host-generic-pc` targets. - The `arg_two` test specifies that the `test-hello-arg` should be executed with two arguments: `arg1` and `arg2` (`execute: test-hello-arg arg1 arg2`). We provide the `hello_arg_harness.py` as the harness. In the `kwargs` section, we -set `argc` to `2`. Additional, we specify that this test should only run on the `ia32-generic-qemu` target. +set `argc` to `2`. Additionally, we specify that this test should only run on the `ia32-generic-qemu` target. - The `arg_hello` test specifies that the `test-hello-arg` executable should be executed with the argument `world`. We provide the `hello_arg_harness.py` as the harness. In the `kwargs` section, we set `input` to `Adios!`. This word will be used as the input to the `test-hello-arg`. We also set `nightly` to false for this specific test. Thanks to that, the diff --git a/usb/usbhost.md b/usb/usbhost.md index 0cd33273..bba75551 100644 --- a/usb/usbhost.md +++ b/usb/usbhost.md @@ -24,9 +24,9 @@ which it would then communicate in terms of scheduling transfers and detecting d Hubs are the basis of the USB devices tree. Each HCD has its own Root Hub with at least one port. Both Root Hubs and additional physical hub devices are managed using the hub driver, which is the only USB class driver implemented as a part of the USB stack, while other USB classes are implemented as separate processes. The hub driver is responsible for -managing port status changes, e.g. devices connection or disconnection. When a new device is connected the hub module +managing port status changes, e.g. device connection or disconnection. 
When a new device is connected, the hub module performs the enumeration process and binds the device with appropriate
drivers. On device disconnection, it shall
-unbind a device from drivers, destroy a device and all its resources.
+unbind the device from its drivers and destroy the device and all its resources.
## Drivers
@@ -57,7 +57,7 @@ device, e.g. `/dev/umass0`, `/dev/umass1`, `/dev/usbacm0`, etc.
## Pipes
Pipes are a software abstraction of a USB endpoint. Drivers communicate with specific endpoints using pipes. A pipe is
-characterized with a direction (in or out) and type (control, bulk, interrupt isochronous). A device driver first
+characterized by a direction (in or out) and type (control, bulk, interrupt, isochronous). A device driver first
**opens** a pipe by sending a USB `open` message (implemented in `libusb` as the `usb_open()` function). A driver gives
details on the pipe it requests to open. If the USB Host stack finds an endpoint on a given device interface with the
given direction and type, it creates a pipe, allocates an `id` unique in the context of the driver and returns the ID to the