Note: the framework’s name, “device driver framework,” is obsolete: it is historical baggage from its first client component, imported device drivers. Today, a more accurate name would be the “OS Environment” framework. It provides the API and glue used by all “large” encapsulated components (devices, networking, filesystems) imported from other operating systems. We’ll change the name and documentation in the future.
A note on organization and content: this chapter really contains three quite separate parts: a general narrative about execution models, some very sketchy documentation of the “up-side” device interfaces, and, for the bulk of the chapter, the “osenv” interfaces. A later chapter (17) talks sketchily about the default implementation of the interfaces found here.
The OSKit device driver framework is a device driver interface specification designed to allow existing device drivers to be borrowed from well-established operating systems in source form, and used unchanged to provide extensive device support in new operating systems or other programs that need device drivers (e.g., hardware configuration management utilities). With appropriate glue, this framework can also be used in an existing operating system to augment the drivers already supported by the OS. (We believe it’s possible to extend the framework to accommodate drivers in binary form.) This chapter describes the device driver framework itself; other chapters later in this document describe specific libraries provided as part of the OSKit that provide driver and kernel code implementing or supporting this interface.
The primary goals of this device driver framework are, in order from most to least important:
Since the most important goal of this framework is to achieve wide hardware coverage by making use of existing drivers, and not to define a new model or interface for writing drivers, it is somewhat more demanding and restricting in terms of OS support than would be ideal if we were writing entirely new device drivers from scratch. Other device driver interface standards, such as SVR4’s DDI/DKI and UDI [1], are not designed to allow easy adaptation of existing drivers; instead, they are intended to define and restrict the interfaces and environment used by new drivers specially written for those interfaces, so that these new drivers will be as widely useful as possible. For example, UDI requires all conforming drivers to be implemented in a nonblocking interrupt model; this theoretically allows UDI drivers to run easily in either process-model or interrupt-model kernels, but at the same time it eliminates all possibility of adapting existing traditional process-model drivers to be UDI conformant without extensive changes to the drivers themselves. Hopefully, at some point in the future, one of these more generic device driver standards will become commonplace enough so that conforming device drivers are available for “everything”; however, until then, the OSKit device driver framework takes a compromise approach, being designed to allow easy adaptation of a wide range of existing drivers while keeping the primary interface as simple and flexible as possible.
Because the range of existing drivers to be adopted under this framework is so diverse in terms of the assumptions and restrictions made by the drivers, it would be impractical to define the requirements of the framework as a whole to be the “union” of all the requirements of all possible drivers. For example, if we had taken that approach, then the framework would only be usable in kernels in which all physical memory is directly mapped into the kernel’s virtual address space at identical addresses, because some drivers will not work unless that is the case. This restriction would make the framework completely unusable in many common OS environments, even though there are plenty of drivers available that don’t make the virtual = physical assumption and should work fine in OS environments that don’t meet that requirement.
For this reason, we have defined the framework itself to be somewhat more generic than is suitable for “all” existing drivers, and to account for the remaining “problematic” drivers, we make a distinction between full and partial compliance. A fully compliant driver is a driver that makes no additional assumptions or requirements beyond those defined as part of the basic driver framework; these drivers should run in any environment that supports the framework. A partially compliant driver is a driver that is compliant with the framework, except that it makes one or more additional restrictions or requirements, such as the virtual = physical requirement mentioned above. For each partially-compliant driver provided with the OSKit, the exact set of additional restrictions made by the driver are clearly documented and provided in both human- and machine-readable form so that a given OS environment can make use of the framework as a whole while avoiding drivers that will not work in the environment it provides.
In a typical OS environment in which all device drivers run in the kernel, Figure 8.1 illustrates the basic organization of the device driver framework.
The heavy black horizontal lines represent the actual interfaces comprising the framework, which are described in this chapter. There are two primary interfaces: the device driver interface (or just “driver interface”), which the OS kernel uses to invoke the device drivers; and the driver-kernel interface (or just “kernel interface”), which the device drivers use to invoke kernel support functions. The kernel implements the kernel interface and uses the driver interface; the drivers implement the driver interface and use the kernel interface.
Chapter 17 describes a library supplied as part of the OSKit that provides facilities to help the OS implement the kernel interface and use the driver interface effectively. Default implementations suitable in typical kernel environments are provided for many operations; the OS can use these default implementations or not, as the situation demands.
Several chapters in Part IV describe device driver sets supplied with the OSKit for use in environments supporting the OSKit device driver framework. Since the Flux project is not in the driver writing business, and does not wish to be, these driver sets are derived from existing kernels, either unchanged or with as little code modified as possible so that the versions of the drivers in the OSKit can easily be kept up-to-date with the original source bases from which they are derived.
Up to this point we have used the term “device driver set” fairly loosely; however, in the context of the OSKit device driver framework, this term has a very important, specific meaning. A driver set is a set of related device drivers that work together and are fairly tightly integrated together. Different driver sets running in a given environment are independent of each other and oblivious to each other’s presence. Drivers within a set may share code and data structures internally in arbitrary ways; however, code in different driver sets may not directly share data structures. (Different driver sets may share code, but only if that code is “pure” or operates on a disjoint set of data structures: for example, driver sets may share simple functions such as memcpy.)
Of course, the surrounding OS can maintain shared data structures in whatever way it chooses; this is the only way drivers in different sets can interact with each other. For example, if a kernel is using a FreeBSD device driver to drive one network card and a Linux driver to drive another, then the kernel can take IP packets coming in on one card and route them out through the other card, but the network device drivers themselves are completely oblivious to each other’s presence.
Some driver sets may contain only a single driver; this is ideal for modularity purposes, since in this case each such driver is independent of all others. Also, given some effort on the part of the OS, some multi-driver sets can be “split up” into multiple single-driver sets and used independently; Section 8.4.1 describes one way this can be done.
In essence, each driver set represents an “encapsulated environment” with a well-defined interface and a clearly-bounded set of state. The concept of a driver set has important implications throughout the device driver framework, especially in terms of execution environment and synchronization; the following sections describe these aspects of the framework in more detail.
Note that currently all “osenv” code in the same address space is essentially a single driver set. We are planning on changing this to allow drivers to be independent from each other. Currently, the only way to achieve this is to run them in separate address spaces.
Device drivers running in the OSKit device driver framework use the interruptible, blocking execution model, defined in Section 2.5, and all of the constraints and considerations described in that section generally apply to OSKit device drivers. However, there are a few execution model issues specific to device drivers, which are dealt with here.
In some situations, for reasons of elegance, modularity, configuration flexibility, robustness, or even (in some cases) performance, it is desirable to run device drivers in user mode, as “semi-ordinary” application programs. This is done as a matter of course by some microkernels. There is nothing in the OSKit device driver framework that prevents its device drivers from executing in user mode, and in fact the framework was deliberately designed with support for user-mode device drivers in mind.
Figure 8.2 illustrates an example system in which device drivers are located in user-mode processes. In this case, all of the code within a given driver set is part of the user-level device driver process, and the “surrounding” OS-specific code, which makes calls to the drivers through the driver interface, and provides the functions in the “kernel interface,” is not actually kernel code at all but, rather, “glue” code that handles communication with the kernel and other processes. For example, many of the functions in the driver-kernel interface, such as the calls to allocate interrupt request lines, will be implemented by this glue code as system calls to the “actual” kernel, or as remote procedure calls to servers in other processes.
Device driver code running in user space will typically run in the context of ordinary threads; the execution environment required by the driver framework can be built on top of these threads in different ways. For example, the OS-specific glue code may run on only a single thread and use a simple coroutine mechanism to provide a separate stack for each outstanding process-level device driver operation; alternately, multiple threads may be used, in which case the glue code will have to use locking to provide the nonpreemptive environment required by the framework.
Dispatching interrupt handlers in these user-mode drivers can be handled in various ways, depending on the environment and kernel functionality provided. For example, interrupt handlers may be run as “signal handlers” of some kind “on top of” the thread(s) that normally execute process-level driver code; alternatively, a separate thread may be used to run interrupt handlers. In the latter case, the OS-specific glue code must use appropriate locking to ensure that process-level driver code does not continue to execute while interrupt handlers are running.
One particularly difficult problem for user-level drivers in general, and especially for user-level drivers built using this framework, is supporting shared interrupt lines. Many platforms, including PCI-based PCs, allow multiple unrelated devices to send interrupts to the processor using a single request line; the processor must then sort out which device(s) actually caused the interrupt by checking each of the possible devices in turn. With user-level drivers, the code necessary to perform this checking is typically part of the user-mode device driver, since it must access device-specific registers. Thus, in a “naive” implementation, when the kernel receives a device interrupt, it must notify all of the drivers hooked to that interrupt, possibly causing many unnecessary context switches for every interrupt.
The typical solution to this problem is to allow device drivers to “download” small pieces of “disambiguation” code into the kernel itself; the kernel then chains together all of the code fragments for a particular interrupt line, and when an interrupt occurs, the resulting code sequence determines exactly which device(s) caused the interrupt, and hence, which drivers need to be notified. This solution works fine for “native” drivers designed specifically for the kernel in question; however, there is no obvious, straightforward way to support such a feature in the driver framework.
For this reason, until a better solution can be found, the following policy applies to using shared interrupts in this framework: for a given shared interrupt line, either the kernel must unconditionally notify all registered drivers running under this framework, and take the resulting performance hit; or else the drivers running under this framework will not support shared interrupts at all. (Native drivers written specifically for the kernel in question can still use the appropriate facilities to support shared interrupt lines efficiently.)
Since this framework emphasizes breadth, adaptability, and ease-of-use over raw performance, the performance of device drivers running under this framework is likely to suffer somewhat; how much depends on how well-matched the particular driver is to the driver framework and to the host OS. Various factors can influence driver performance: for example, if the OS’s network code does not match the network drivers in terms of whether scatter/gather message buffers are supported or required, performance is likely to suffer somewhat due to extra copying between the driver and the OS’s network code. The OS developer will have to take these issues into account when selecting which sets of device drivers to use (e.g., FreeBSD versus Linux network drivers). If the device driver sets are chosen carefully and the OS’s driver support code is designed well, in many cases it should be possible to use these drivers with minimal performance loss.
Another consideration is how extensively the OS should rely on this device driver framework. There is nothing preventing the OS from maintaining its own (probably smaller) collection of “native” drivers designed and tuned for the particular OS; this way, the OS can achieve maximum performance for particularly common or performance-critical hardware devices, and use the larger set of device drivers easily available through this framework to provide support for other types of hardware that otherwise wouldn’t be supported at all. This approach of combining native and emulated drivers is likely to be especially important for kernels that are not well matched to the existing drivers this framework was designed around: e.g., “stackless” interrupt model kernels which must run emulated device drivers on special threads or in user space.
For a very rough idea of the performance of drivers and kernels using this framework, see the results in our SOSP’97 paper “The Flux OSKit: A Substrate for OS and Language Research.” Performance results for a related but less formal and less encapsulated framework can be found in the USENIX’96 paper “Linux Device Driver Emulation in Mach.”
When the host OS is ready to start using device drivers in this framework, it typically calls a probe function for each driver set it uses; this function initializes the drivers and checks for hardware devices supported by any of the drivers in the set. If any such devices are found, they are registered with the host OS by calling a registration routine specific to the type of bus on which the device resides (e.g., ISA, PCI, SCSI). The host OS can then record this information internally so that it knows which devices are available for later use. The OS can implement device registration any way it chooses; however, the driver support library (libdev) provided by the OSKit provides a default implementation of a registration mechanism which builds a single “hardware tree” representing all known devices; see Section 17.2 for more information.
When a device driver discovers a device, it creates a device node structure representing the device. The device node structure can be of arbitrary size, and most of its contents are private to the device driver. However, the first part of the device node is always a structure of type oskit_device_t, defined in oskit/dev/dev.h, which contains generic information about the device and driver needed by the OS to make use of the device. In addition, depending on the device’s type, there may be additional information available to the host OS, as described in the following section.
Device nodes have types that follow a C++-like single-inheritance subtyping relationship, where oskit_device_t is the ultimate ancestor or “supertype” of all device types.
In general, the host OS must know what class of device it is talking to in order to make use of it properly. On the other hand, it is not strictly necessary for the host OS to recognize the specific device type, although it may be able to make better use of the device if it does.
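The single-inheritance layout can be illustrated with plain C struct embedding: because the generic part is always the first member of the device node, a pointer to the subtype can be safely viewed as a pointer to the supertype. This is only a sketch: the field names below and the example subtype are hypothetical, not the actual contents of oskit_device_t in oskit/dev/dev.h.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for oskit_device_t; field names are illustrative. */
typedef struct oskit_device {
    const char *name;        /* human-readable device name */
    const char *description; /* longer description string */
} oskit_device_t;

/* A "subtype": the generic header must be the FIRST member, so a pointer
 * to the subtype is also a valid pointer to the supertype. */
typedef struct example_blkdev {
    oskit_device_t dev;      /* generic part: must come first */
    unsigned block_size;     /* driver-private data follows */
} example_blkdev_t;

/* The host OS can operate on any device through the supertype alone. */
const char *device_name(oskit_device_t *d) { return d->name; }

/* Code that knows the specific type can reach the driver-private data. */
unsigned blkdev_block_size(example_blkdev_t *b) { return b->block_size; }
```

Because the supertype sits at offset zero, `(oskit_device_t *)&blk` and `&blk.dev` are the same pointer, which is what makes the C++-like upcast safe in plain C.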
The block device class has the following attributes:
The character device class has the following characteristics:
The network device class has the following characteristics:
Note that it would certainly be possible to decompose these device classes into a deeper type hierarchy. For example, in abstract terms it might make sense to arrange character and network devices under a single supertype representing “asynchronous” devices. However, since the structure representing this “abstract supertype” would contain essentially nothing in terms of actual code or data, this additional level was not deemed useful for the driver framework. Of course, the OS is free to use any type hierarchy (or non-hierarchy) it desires for its own data structures representing devices, drivers, etc.
XXX overview
While asynchronous I/O is not directly supported by the OSKit device interface, it is possible to create an asynchronous interface in the OS itself, which calls the blocking fdev functions.
XXX some rare, poorly-designed hardware does not work right if long delays occur while programming the devices. (This is supposedly the case for some IDE drives, for example.) For this reason, reliability and hardware compatibility may be increased by implementing osenv_intr_disable as a function that really does disable all interrupts on the processor in question.
XXX Symbol name conflicts among libraries... For each existing driver set, provide a list of “reserved” symbols used by the set.
XXX This should be moved somewhere else:
All functions may block, except those specifically designated as nonblocking.
All functions may be called at any time, including during driver initialization. In other words, all of the functionality exposed by this interface must be present and fully operational by the time the device drivers are initialized.
This section describes the OSKit device driver interfaces that are common to all types of drivers and hardware.
#include <oskit/dev/dev.h>
XXX
oskit_dev_init
oskit_X_init_X
oskit_dump_drivers
oskit_dev_probe
oskit_dump_devices
rtc_get and rtc_set interfaces (Real time clock).
The OS must provide routines for drivers to call to allocate memory for the private use of the drivers, as well as for I/O buffers and other purposes. The OSKit device driver framework defines a single set of memory allocation functions which all drivers running under the framework call to allocate and free memory.
Device drivers often need to allocate memory in different ways, or memory of different types, for different purposes. For this reason, the device driver framework defines a set of flags provided to each memory allocation function describing how the allocation is to be done, or what type of memory is required.
As with other aspects of the OSKit device driver framework, the libdev library provides default implementations of the memory allocation functions, but these implementations may be replaced by the OS as desired. The default implementations make a number of assumptions which are often invalid in “real” OS kernels; therefore, these functions will often be overridden by the client OS. Specifically, the default implementation assumes:
Additionally, the default routines which deal with physical memory addresses make these assumptions:
XXX typedef unsigned osenv_memflags_t;
All of the memory allocation functions used by device drivers in the OSKit device framework take a parameter of type osenv_memflags_t, which is a bit field describing various option flags that affect how memory allocation is done. Device drivers often need to allocate memory that satisfies certain constraints, such as being physically contiguous, or page aligned, or accessible to DMA controllers. These flags abstract out these various requirements, so that all memory allocation requests made by device drivers are sent to a single set of routines; this design allows the OS maximum flexibility in mapping device memory allocation requests onto its internal kernel memory allocation mechanisms.
Routing all memory allocations through a single interface this way may have some impact on performance, due to the cost of decoding the flags argument on every allocation or deallocation call. However, this cost is expected to be small compared to the typical cost of actually performing the requested operation.
The specific flags currently defined are as follows:
It is possible for the OS to implement these memory allocation routines so that they ignore the OSENV_AUTO_SIZE flag and simply always keep track of block sizes themselves. However, note that in some situations, doing so may produce extremely inefficient memory usage. For example, if the OS memory allocation mechanism prefixes each block with a word containing the block’s length, then any request by a device driver to allocate a page-aligned page (or some other naturally-aligned, power-of-two-sized block) will consume that page plus the last word of the previous page. If many successive allocations are done in this way, only every other page will be usable, and half of the available memory will be wasted. Therefore, it is generally a good idea for the memory allocation functions to pay attention to the OSENV_AUTO_SIZE flag, at least for allocations with alignment restrictions.
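The waste described above can be checked with a small simulation, assuming a hypothetical 4096-byte page and a four-byte length prefix stored immediately before each block. A bump allocator serving page-aligned, page-sized requests must leave room for the prefix below each aligned block, so each request ends up consuming two pages:

```c
#include <assert.h>

enum { PAGE = 4096, PREFIX = 4 }; /* illustrative sizes */

/* Simulate a bump allocator that stores a length word immediately before
 * each block, serving n page-aligned, page-sized requests. Returns the
 * total number of pages consumed. */
unsigned long pages_consumed(unsigned n) {
    unsigned long cursor = 0;                    /* next free byte */
    while (n--) {
        unsigned long a = cursor;
        if (a % PAGE != 0)                       /* round up to a boundary */
            a += PAGE - a % PAGE;
        if (a - cursor < PREFIX)                 /* no room below for the */
            a += PAGE;                           /* prefix: skip a page   */
        cursor = a + PAGE;                       /* block is [a, a+PAGE)  */
    }
    return cursor / PAGE;
}
```

Each request burns the aligned page plus (effectively) the page before it, so only every other page is usable, matching the 50% waste described in the text.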
Component OS, Blocking
This function is called by the drivers to allocate memory. Allocate the requested amount of memory with the restrictions specified by the flags argument as described above.
XXX: While this is defined as blocking, the current glue code cannot yet handle this blocking, as it is not prepared for another request to enter the component. This will be fixed.
Returns the address of the allocated block in the driver’s virtual address space, or NULL if not enough memory was available.
Component OS, Blocking
Frees a memory block previously allocated by osenv_mem_alloc.
XXX: While this is defined as blocking, the current glue code cannot yet handle this blocking, as it is not prepared for another request to enter the component. This will be fixed.
Component OS, Nonblocking
Returns the physical address associated with a given virtual address. The virtual address should refer to a memory block returned by osenv_mem_alloc. XXX does it have to be the exact same pointer, or just a pointer in the block? In systems which do not support address translation, or for blocks allocated with OSENV_VIRT_EQ_PHYS, this function returns va.
The returned address is only valid for the first page of the indicated block unless it was allocated with OSENV_PHYS_CONTIG. In a system supporting paging, the result of the operation is only guaranteed to be accurate if OSENV_PHYS_WIRED was specified when the block was allocated. XXX other constraints?
Returns the PA for the associated (wired) VA. XXX zero (or something else) if VA is not valid?
Component OS, Nonblocking
Returns the virtual address of an allocated physical memory block. Can only be called with the physical address of blocks that have been allocated with osenv_mem_alloc. XXX or else what?
XXX error codes?
XXX If the Linux glue uses this, and gets an error, should the physical memory be mapped (by the glue) (if it is not in the address space) and re-tried?
Returns the VA for the mapped PA.
Component OS, Nonblocking
Returns the top of physical memory, which is normally equivalent to the amount of physical RAM in the machine. Note that memory-mapped devices may reside higher in physical memory, but this is the largest address normal RAM could have.
Returns the amount of physical memory.
Component OS, Blocking
Allocates kernel virtual memory and maps the caller-supplied physical addresses into it. The address must be page-aligned and the length a multiple of the page size.
This function is intended to provide device drivers access to memory-mapped devices.
An osenv_mem_unmap_phys interface will likely be added in the future.
XXX: While this is defined as blocking, the current glue code cannot yet handle this blocking, as it is not prepared for another request to enter the component. This will be fixed.
Flags:
Returns 0 on success, non-zero on error.
This section is specific to ISA devices utilizing the Direct Memory Access controller.
If the OS wishes to support devices that utilize DMA, then basic routines must be provided to allow access to the DMA controller.
The Linux drivers directly access the DMA controller themselves, with macros and with embedded assembly. All devices that utilize the DMA controller must be in the same driver set, as there is no way to arbitrate between different driver sets. Because this shortcoming is in the encapsulated drivers, and would take significant effort to correct, we have not provided an interface to access the DMA controller, although we may in the future.
Component OS, Nonblocking
This requests a DMA channel.
If successful, the driver must be able to directly manipulate the ISA DMA controller.
Returns 0 on success, non-zero if already allocated.
Component OS, Nonblocking
This releases a DMA channel. The DMA channel must have already been reserved by the driver.
Many devices have a concept of “I/O space”. In general, multiple devices cannot share the same range of I/O ports. Unfortunately, there are a few exceptions, most notably the keyboard and PS/2 mouse, and the Floppy and IDE controllers.
Many of the device drivers assume they may access port 0x80, for use in timing loops. This port is unused in most computers, although POST cards display the last value written to it.
Component OS, Nonblocking
Checks whether an entire range of I/O ports is currently available, without allocating it.
Returns 0 (false) if any part of the range is unavailable, non-zero otherwise.
Component OS, Nonblocking
Allocates a range of I/O ports. Returns 0 if the range was free and is now allocated, or an error code if any ports in the range are already allocated.
XXX: shared ports?
XXX: Default implementation panics if range is allocated.
Note: this is based on the assumption that I/O space is not mapped through the MMU. On a system where this is not the case (memory mapped I/O), osenv_mem_map_phys should be used instead.
Component OS, Nonblocking
Releases a range previously allocated. All ports in the range must have been allocated by the device.
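As an illustration of the bookkeeping these three calls imply, a host OS might track port ownership with a simple byte map. The signatures below are simplified approximations of the osenv calls (plain `unsigned` in place of the OSKit types), not their exact prototypes:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical host-OS sketch of I/O-port accounting for one driver set. */
enum { IO_PORTS = 65536 };
static unsigned char io_owned[IO_PORTS]; /* 1 = port is allocated */

/* Nonzero iff every port in [port, port+size) is still free. */
int osenv_io_avail(unsigned port, unsigned size) {
    for (unsigned p = port; p < port + size; p++)
        if (io_owned[p])
            return 0;
    return 1;
}

/* 0 on success; nonzero error if any port in the range is already taken. */
int osenv_io_alloc(unsigned port, unsigned size) {
    if (!osenv_io_avail(port, size))
        return 1;
    memset(io_owned + port, 1, size);
    return 0;
}

/* Release a range previously allocated with osenv_io_alloc. */
void osenv_io_free(unsigned port, unsigned size) {
    memset(io_owned + port, 0, size);
}
```

A driver would typically probe with osenv_io_avail before touching the hardware, then claim the range with osenv_io_alloc once the device is confirmed present.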
Shared interrupts are supported, as long as OSENV_IRQ_SHAREABLE is requested by all devices wishing to use the interrupt.
In a given driver environment in this framework, there are only two “interrupt levels”: enabled and disabled. In the default case in which all device drivers of all types are linked together into one large driver environment in an OS kernel, this means that whenever one driver masks interrupts, it masks all device interrupts in the system.
However, an OS can implement multiple interrupt priority levels, as in BSD or Windows NT, if it so desires, by creating separate “environments” for different device drivers. For example, if each driver is built into a separate, dynamically-loadable module, then the osenv_intr_ calls in different driver modules could be resolved by the dynamic loader to spl-like routines that switch between different interrupt priority levels. For example, the osenv_intr_disable call in network drivers may resolve to splnet, whereas the same call in a disk driver may be mapped to splbio instead.
Component OS, Nonblocking
Disable further entry into the calling driver set through an interrupt handler. This can be implemented either by directly disabling interrupts at the interrupt controller or CPU, or using some software scheme.
XXX Merely needs to prevent intrs from being dispatched to the driver set. Drivers may see spurious interrupts if they briefly cause interrupts while disabled.
XXX Timing-critical sections need interrupts actually disabled.
Component OS, Nonblocking
Enable interrupt delivery to the calling driver set. This can be implemented either by directly enabling interrupts at the interrupt controller or CPU, or using some software scheme.
Component OS, Nonblocking
Returns the driver’s view of the current interrupt status.
Returns non-zero if interrupts are currently enabled, zero otherwise.
Component OS, Nonblocking
Set the interrupt status to disabled and return the previous status.
This call is equivalent to calling osenv_intr_enabled and then calling osenv_intr_disable if the result was non-zero and is intended to optimize that common case.
Returns non-zero if interrupts are currently enabled, zero otherwise.
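The equivalence stated above can be written out directly. This sketch simulates the interrupt status with a global flag rather than touching a real CPU or interrupt controller:

```c
#include <assert.h>

/* Minimal simulation of the interrupt-status calls; a real implementation
 * would manipulate the CPU or interrupt controller, or a software flag
 * guarding dispatch into the driver set. */
static int intr_enabled = 1;

int  osenv_intr_enabled(void) { return intr_enabled; }
void osenv_intr_disable(void) { intr_enabled = 0; }
void osenv_intr_enable(void)  { intr_enabled = 1; }

/* As the text specifies: read the status, disable if it was enabled,
 * and return the previous status. */
int osenv_intr_save_disable(void) {
    int was_enabled = osenv_intr_enabled();
    if (was_enabled)
        osenv_intr_disable();
    return was_enabled;
}
```

The typical critical-section pattern is then: `int s = osenv_intr_save_disable(); /* ... */ if (s) osenv_intr_enable();`, which restores exactly the state the caller found.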
Component OS, Blocking
Allocate an interrupt request line and attach the specified handler to it. On interrupt, the kernel must pass the data argument to the handler.
XXX: interrupts should be “disabled” when the handler is invoked.
XXX: This has not been verified to function correctly if an incoming request is processed while this is blocked.
Flags:
Returns 0 on success, non-zero on error.
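A sketch of the registration-and-dispatch flow follows, with a simulated single-slot registry standing in for the host OS’s real implementation; the signature is an approximation of the osenv call, and the driver structure is hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* Simulated single-slot interrupt registry (one line, no sharing). */
static void (*irq_handler)(void *);
static void *irq_data;

/* Approximate osenv_irq_alloc: attach handler to an interrupt line,
 * remembering the data pointer to pass back on each interrupt. */
int osenv_irq_alloc(int irq, void (*handler)(void *), void *data, int flags) {
    (void)irq; (void)flags;
    if (irq_handler)
        return 1;            /* line already taken and not shareable */
    irq_handler = handler;
    irq_data = data;
    return 0;
}

/* What the kernel side does when the hardware interrupt arrives:
 * invoke the registered handler with the driver's data argument. */
void deliver_irq(void) {
    if (irq_handler)
        irq_handler(irq_data);
}

/* A driver-side handler: count interrupts for this device instance. */
struct mydev { int interrupts; };
void mydev_intr(void *data) { ((struct mydev *)data)->interrupts++; }
```

The data pointer is what lets one handler function serve several device instances: each registration carries its own per-device structure.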
Component OS, Nonblocking
Removes the indicated interrupt handler. The handler is only removed if it was registered with osenv_irq_alloc for the indicated interrupt request line and with the indicated data pointer.
Component OS, Nonblocking
Prevents a specific interrupt line from delivering an interrupt. Can be done in software or by disabling at the interrupt controller.
If the interrupt does occur while disabled, it should be delivered as soon as osenv_irq_enable is called (see that section for details).
Component OS, Nonblocking
This allows the specified interrupt to be received, provided interrupts are enabled (i.e., osenv_intr_enabled also returns true).
Component OS, Nonblocking
Determine if an interrupt is pending for the specified interrupt line.
Returns 1 if an interrupt is pending for the indicated line, 0 otherwise.
The current driver model only allows one thread or request into the driver set at a time. However, if the driver set is waiting for an external event and can handle another request while it is waiting, then the driver sleeps.
The default implementation of sleep busy-waits on the event, as it is not possible for it to do more without knowledge of the operating system environment it is in.
Component OS, Nonblocking
This function initializes a “sleep record” structure in preparation for the current process’s going to sleep waiting for some event to occur. The sleep record is used to avoid races between actually going to sleep and the event of interest, and to provide a “handle” on the current activity by which osenv_wakeup can indicate which process to awaken.
Component OS, Blocking
The driver calls this function at process level to put the current activity (process) to sleep until some event occurs, typically triggered by a hardware interrupt or timer handler. The driver must supply a pointer to a process-private “sleep record” variable (sleeprec), which is typically just allocated on the stack by the driver. The sleeprec must already have been initialized using osenv_sleep_init. If the event of interest occurs after the osenv_sleep_init but before the osenv_sleep, then osenv_sleep will return immediately without blocking.
Returns the wakeup status value provided to osenv_wakeup.
Component OS, Nonblocking
The driver calls this function to wake up a process-level activity that has gone to sleep (or is preparing to go to sleep) waiting on some event. The value of wakeup_status is subsequently returned to the caller of osenv_sleep, making it possible to indicate various wakeup conditions (such as abnormal termination). It is harmless to wake up a process that has already been woken.
The device support code relies on the OS to provide timers to control events. Unfortunately, timers are in a state of flux, and there are currently too many ways to do almost the same thing. We will be cleaning this up.
Meanwhile, the interface provided by the host OS is currently at the osenv_timer layer. However, we plan to move the abstraction layer down to a simple “PIT” interface. (The existing osenv_timer_pit code is similar to the planned interface.)
When we move to an osenv_pit interface, the driver glue code will use an intermediate timer ‘device driver’ which will provide the higher-level functionality currently in the osenv_timer interface. The motivation for this is to make the OS-provided interface as simple as possible and to build extra functionality on top.
‘dev/clock.c’ is an example device driver built on the osenv_timer interface. It could be implemented on top of an osenv_pit interface as easily as on the osenv_timer interface.
The current implementation of the default osenv_timer code is based on the osenv_timer_pit interface. osenv_timer_pit is not currently defined as part of the osenv API, but merely exists for implementation convenience. However, overriding the osenv_timer_pit implementation is probably the easiest way to provide a different implementation of the osenv_timer interface.
The default osenv_timer implementation also provides an osenv_timer_shutdown hook for use by the host operating system. osenv_timer_shutdown disables the osenv_timer.
Component OS, Nonblocking
XXX: Belongs in libdev.a section
Initializes the timer code.
Component OS, Nonblocking
Requests that the function func be called freq times per second.
XXX: Default implementation currently only works for freq equal to 100.
Component OS, Nonblocking
The function pointer and frequency must exactly match the parameters of a previous osenv_timer_register call.
Component OS, Nonblocking
This allows a driver component to delay for a specified amount of time (usually to let hardware catch up) without blocking. Unlike osenv_sleep, it cannot give up the process-level lock.
All output goes through the osenv_vlog interface.
The following log priorities are defined. From highest priority to lowest, they are: OSENV_LOG_EMERG, OSENV_LOG_ALERT, OSENV_LOG_CRIT, OSENV_LOG_ERR, OSENV_LOG_WARNING, OSENV_LOG_NOTICE, OSENV_LOG_INFO, and OSENV_LOG_DEBUG, which correspond to the log priorities used by both BSD and Linux.
Component OS, Nonblocking
This is the output interface to the device driver framework. All output must go through this interface, so the OS may decide what to do with it.
Normal printf-type calls should get converted to the OSENV_LOG_INFO priority.
Component OS, Nonblocking
Front-end to osenv_vlog
Component OS, Nonblocking
This function should only be called if the device driver framework can no longer continue and cannot exit gracefully.
The driver’s ‘native’ panic calls will get resolved to this function call.
The OS should provide this function as a graceful way of dealing with a situation that prevents the drivers from continuing.
Component OS, Nonblocking
Front-end to osenv_vpanic
Nothing here yet, sorry. See Section 17.2 for a tiny bit more information on our current default implementation of device registration. More information can be gained from the extensively commented header files in the directory <oskit/dev>, starting with file device.h.
This section is incomplete. Block device interfaces now provide an open method which returns a per-open blkio object through which block reads and writes are done. See Section 7.3. In the absence of other documentation, the example programs will be helpful.
XXX describe oskit_blkdev, blksize, etc.
XXX: This section is in severe need of an update.
Character device support is provided in the OSKit using device drivers from FreeBSD.
XXX: new device tree management
The address parameter is used to uniquely identify the device on the ISA bus. For example, if there are two identical NE2000 cards plugged into the machine, the address will be the only way the host OS can distinguish them, because all of the other parameters of the device will be identical. If address is in the range 0-0xffff (0-65535), it is interpreted as a port number in I/O space; otherwise, it is interpreted as a physical memory address. For devices that use any I/O ports for communication with software, the base of the “primary” range of I/O ports used by the device should be used as the address; a physical memory address should be used only for devices that communicate solely through memory-mapped I/O.