If, from reading these notes, you conclude that I am off my rocker, you won’t be the first, and you may even be right.
No doubt there are a dozen and one reasons why none of this would ever work, but perhaps somewhere deep down there is a tiny fragment that could be used for something.
The notion of an API is somewhat tied to the idea of a monolithic kernel: context switches in and out of kernel space to make requests of a large collection of routines.
With a lot of system functionality divested into manager processes, many tasks will be achieved by message passing. The question raised is less a case of the API type and more a case of ensuring that maximum efficiency is achieved here. For example, activities like moving and scrolling windows, drawing window content, delivering audio and reading and writing files must involve the least overhead.
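One common way to keep the overhead low for bulk transfers like window content and audio (a standard technique, not something these notes prescribe, and with all names hypothetical) is to keep the data in a shared buffer and pass only a small handle in the message, so the message cost stays constant regardless of content size:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch: the bulk data (pixels, samples) lives in a
// shared-memory region; the message carries only a handle to it.
struct SharedBufferHandle {
    std::uint32_t id;     // identifier for a shared region, issued by the system
    std::size_t   length; // number of bytes valid in the region
};

// A request to draw window content: tiny and fixed-size, however
// large the content itself is.
struct DrawRequest {
    std::uint32_t      window;  // target window ID
    SharedBufferHandle pixels;  // content is referenced, not copied
};
```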
Windows is a perfect example of how not to handle an API. Windows processes primarily communicate with the OS by linking against DLLs containing the communication logic. On top of this, Microsoft decided that graphical applications should be constructed with the Microsoft Foundation Classes, the start of their rollercoaster ride of changing their mind on this. What little graphical interprocess communication was possible went through Dynamic Data Exchange, a strange attempt to piggyback on the window system messages.

Then there is Windows Management Instrumentation, which is not just readouts (as the name rather suggests) but an in-depth API to the computer. WMI is painfully poorly documented, and because it is object-based and polymorphic you end up receiving objects that a scripting language cannot identify by any means. WMI does, however, allow requests to be sent to other systems, unlike the conventional API.

PowerShell was then added and became an API in its own right: Exchange Management Console (not to be confused with Exchange Management Shell, thanks to some terrible naming) communicated with Exchange by delivering PowerShell commands instead of proper message passing. PowerShell itself is a bizarre muddle of ideas and paradigms, a misguided attempt to mash together the PowerShell API itself (the cmdlets) with the .NET framework: PowerShell’s own API uses a UNIX-like shell syntax while .NET is object oriented, so you bounce backwards and forwards between two paradigms with completely different syntax. (PowerShell syntax itself is an attempt to distill the very worst of Perl.) PowerShell also allows requests to be issued to other machines, but in the most painful ways possible.
Compare the classic Mac design. Foremost, classic Mac OS has a conventional API. Above this sits the Apple Event system, which allows structured, high-level messages to be passed between processes and between computers. Apple Events are in a binary format with 32-bit identifiers for message types and parameter IDs. Standard functionality is included directly in the system libraries, and by way of a scripting dictionary incorporated into each application, every application can have its own message repertoire that allows it to be scripted; users can write scripts (typically in AppleScript) that are automatically translated into these binary messages.
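Those 32-bit identifiers are classically four-character codes packed into an integer. A minimal sketch of such a message, with hypothetical names (the real Apple Event Manager types differ), might look like:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Build a 32-bit identifier from a four-character code, in the style
// of classic Mac OS event class/ID values such as 'core' and 'odoc'.
constexpr std::uint32_t fourCC(char a, char b, char c, char d) {
    return (std::uint32_t(std::uint8_t(a)) << 24) |
           (std::uint32_t(std::uint8_t(b)) << 16) |
           (std::uint32_t(std::uint8_t(c)) << 8)  |
            std::uint32_t(std::uint8_t(d));
}

// A structured binary message: event class and ID, plus parameters
// keyed by 32-bit codes rather than by strings.
struct Event {
    std::uint32_t eventClass;                     // e.g. fourCC('c','o','r','e')
    std::uint32_t eventID;                        // e.g. fourCC('o','d','o','c')
    std::map<std::uint32_t, std::string> params;  // keyed by 32-bit parameter IDs
};
```

A scripting dictionary then maps human-readable verbs onto these codes, which is what lets a script interpreter translate prose-like commands into the binary form.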
AppleScript and PowerShell have a fundamental difference: AppleScript is for controlling desktop applications, while PowerShell is for controlling systems. Neither one is suitable for the other task. AppleScript can be used as an event handler inside a graphical application, as a drag-and-drop file processor, and for automating tasks within an application, such as batch e-mail processing. Notwithstanding the various built-in commands, AppleScript can only communicate with an active process, so extra functionality involves launching a hidden background application.
To the maximum extent possible, we should be striving for uniformity and orthogonality. The events page describes a structured message-passing system for communication between applications and the system. This design is both uniform and orthogonal: all communication uses the same message system (uniformity), and local versus network delivery is available to every message as part of the design (orthogonality).
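One way to sketch that orthogonality (all names hypothetical, and not taken from the events page) is to make the delivery target a field of a single uniform envelope, so that remote delivery is never a separate API:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical sketch of a uniform message envelope: every message uses
// the same header, and the destination decides local vs network delivery.
struct Destination {
    std::string   host;    // empty string means local delivery
    std::uint32_t process; // receiving process or manager ID
};

struct Message {
    Destination               dest;
    std::uint32_t             type;  // message type identifier
    std::vector<std::uint8_t> body;  // structured payload

    // The sender never calls a different function for remote targets;
    // the transport layer inspects the destination instead.
    bool isLocal() const { return dest.host.empty(); }
};
```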
This system could easily be made extensible for controlling desktop applications by implementing scripting/event dictionaries, both interactively and, non-interactively, by using helper applications that are launched to receive the events.
The only concern is that a high-level message-passing system is not suitable for high-performance tasks, and this results in a break of uniformity. This is likely inevitable, but it leaves the question of where the split should occur. If we do use a full-on microkernel then we are going to end up sending messages anyway, so the question is likely to come down to a matter of different message formats, and to what extent the network delivery system has coverage.
Ideally it should be possible to access the API with the minimum of effort. For example, the Windows command to lock the workstation is as follows:

rundll32.exe user32.dll,LockWorkStation
Here, Rundll32.exe is being used to invoke a Windows API call directly, which is ugly. Rundll32.exe is a badly-designed hack that should not exist; individual utilities that need to be run from inside libraries should be proper helper stub applications. (Classic Mac OS turned control panels into standalone applications by dynamically “manufacturing” a helper process when a control panel was launched, and the resulting process had the name and icon of the control panel.)
The correct approach here would be:
// sendSync() is local delivery only, synchronous
// return parameter is ignored
MessageLib::sendSync(VIRTUAL_PROC_SESSION, SessionManager::MESSAGE_LOCK);
For both scripting languages and compiled languages, one could argue that helper wrappers around each message type would be beneficial, just for readability. After all, the above code is actually longer than the Windows command.
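A minimal sketch of such a wrapper, stubbing out the message library from the lock-workstation example (everything beyond MessageLib::sendSync() and the two constants named there is hypothetical, and the stub bodies exist only so the sketch is self-contained):

```cpp
#include <cstdint>

// Stand-ins for the system message library used in the example above.
enum : std::uint32_t { VIRTUAL_PROC_SESSION = 1 };
namespace SessionManager { enum : std::uint32_t { MESSAGE_LOCK = 1 }; }

namespace MessageLib {
    // Stub state so the stub is observable; a real implementation
    // would deliver the message to the session manager.
    inline std::uint32_t lastTarget  = 0;
    inline std::uint32_t lastMessage = 0;
    inline void sendSync(std::uint32_t target, std::uint32_t message) {
        lastTarget  = target;
        lastMessage = message;
    }
}

// The wrapper itself: one readable, self-documenting call per message
// type, hiding the target and message constants from the caller.
namespace Session {
    inline void lock() {
        MessageLib::sendSync(VIRTUAL_PROC_SESSION, SessionManager::MESSAGE_LOCK);
    }
}
```

With the wrapper, the call site shrinks to Session::lock(), which is shorter than both the raw sendSync() call and the Windows command.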