The PowerShell-Haters Handbook
Contents
- Overview
- Operation
- Structure
- Terminology and grammar
- Syntax
- Behaviour
- User interface
- Library
- Other
- Appendix
Overview
PowerShell is the result of some crazed lunatics who thought that the optimal choice of language for lightweight scripting was the distilled essence of the most infamously intractable language on the planet, Perl. Perl is much easier to understand than it appears, until you start hitting your head on all manner of subtleties, such as the inability to tell barewords from constants in hash keys. Perl’s symbol overloading is so bad that Perl resorts to guesswork to “do the right thing”, and this guesswork fails in a variety of obscure ways. Sane Perl code is easy to understand, but so much Perl library code seems to go out of its way to be as bizarre as possible, and Perl has no end of tricks to make programs as difficult to follow as possible without resorting to the likes of APL.
With PowerShell, the clue ought to be in the name. The objective seemed to be a simple, straightforward, readable language suitable for powerful but logical and easy-to-construct one-liners, and clear and maintainable script. That is not what happened.
PowerShell is a collection of everything that is wrong with Perl. It replicates and expands on Perl’s symbol overloading, making it as awkward as possible to make sense of the syntax. It tries desperately to undermine decades of industry knowledge by deliberately choosing syntax forms that confuse anyone accustomed to industry-standard syntax such as C or BASIC. This would be excusable if the new syntax were clean and understandable, but the end result is to layer confusion onto ambiguity. Why are there two different array initialisation syntaxes? Why are hashes initialised using a semicolon as the delimiter? Since { … } can be both a code block and a hash initialiser, the use of a semicolon delimiter means that { …; … } is going to look like a series of statements in a block to anyone used to semicolon being a statement terminator.
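For anyone who has been spared this, here are the two array syntaxes and the semicolon-delimited hash literal side by side:

$a = 1, 2, 3                      # array, bare comma syntax
$b = @(1, 2, 3)                   # array, @( ) syntax
$h = @{ Name = "Fred"; Age = 42 } # hash literal: semicolons as delimiters
& { $x = 1; $y = 2 }              # an actual block of statements, for comparison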
There is a reason that languages such as BASIC and Pascal used such clear, readable syntax, and why BASIC has remained so popular. Keywords such as “THEN” and “END IF” clearly indicate where you stand in the code without depleting the painfully limited repertoire of symbols that can be typed on a keyboard (keyboards having never recovered from the decimation of typography brought about by typewriters).
To a Perl programmer, PowerShell should be easier to learn and work with, but it invariably proves more painful. PowerShell is clearly heavily inspired by Perl, but it misses the point spectacularly on so many counts. The commands are verbose, but the grammar is terse to the point of unreadable. It is a shell language that is too complicated to actually pull off one-liner commands without aggravation.
Note that this page is a work in progress, to be updated as more horrors come to light. Feel free to suggest more horrors that I have not yet had the misfortune to suffer through.
Operation
PowerShell is deliberately designed to prevent you from running scripts by double-clicking on them. Macintosh scripts (in AppleScript), Linux shell scripts and all prior Windows script types all allow you to double-click scripts to run them, but running PowerShell scripts is inconsistent. PowerShell scripts cannot be run directly by anything: you have to construct a special command line to invoke them. If Microsoft are concerned about execution of untrusted software, then crippling PowerShell is not the way to resolve the problem.
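The sort of incantation required, with a hypothetical script path:

powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\Example.ps1"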
PowerShell joins MSI files as another type that cannot be run as an administrator. In fact, this is perfectly possible but just not implemented by default. For some reason, the main verbs are all missing from HKEY_CLASSES_ROOT\Microsoft.PowerShellScript.1 — all that you will find in there is the command to make double-click open in Notepad (and not ISE). The rest of the verbs are inside HKEY_CLASSES_ROOT\SystemFileAssociations\.ps1 and you can update this location to get “Run as administrator” back:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\SystemFileAssociations\.ps1\Shell\RunAs\Command]
@="\"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" \"-Command\" \"if((Get-ExecutionPolicy) -ne 'AllSigned') { Set-ExecutionPolicy -Scope Process Bypass }; & '%1'\""
The command above is the same awful mess used to launch PowerShell commands; using RunAs instead of Open for the verb triggers the User Account Control prompt and starts powershell.exe elevated. Doing the same to force MSI files to run elevated (e.g. when Software Restriction Policies are in force) didn’t work properly, if I remember correctly, although it should have.
Structure
PowerShell scripts require all subroutines to be defined before they can be called. This is nothing to do with the language being interpreted, because BBC BASIC from 1981 allowed forward references to named subroutines despite appearing to be a simple line-by-line interpreter. PowerShell’s design means that subroutine-heavy scripts have their entry point at a random and unpredictable location in the file. In order to examine the basic operation of a script, you have to consume time digging through pages of code to find the starting point, as it’s anywhere but the beginning. For a 21st century language, this is an egregious oversight.
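A trivial demonstration, with a made-up function name:

Say-Hello          # fails: 'Say-Hello' is not recognized, because the
                   # function statement below has not executed yet

function Say-Hello {
    Write-Host "Hello"
}

Say-Hello          # works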
Terminology and grammar
- The word “cmdlet”, whose inanity matches perfectly the insanity of a shell that takes a ludicrously long time to load off an HDD, reminding you why UNIX was such a successful console-based OS (you should not need an SSD to load a shell at the same speed as a 1970s OS)
- “New” as a verb: the whole verb–noun concept is undermined by a basic inability to understand simple natural language grammar
- The verb–noun arrangement means that alphabetical command lists spanning more than one noun come out in close to a random order, with related commands—those sharing the same noun—scattered about (as seen in the Microsoft PowerShell module documentation)
- The verb–noun arrangement defeats any attempt at namespacing commands, forcing the namespace terms to come as noun modifiers even where the nouns are not part of the concept or product; for example, Get-QADUser tells you that it gets a “QAD” user, but it is actually a “Q” command to get an AD user
- Get commands are normally singular, despite the fact that they return plural datasets; if we are going to thrash the hard drive for an eternity loading this monstrosity, it should at least manage to provide separate singular and plural commands that indicate whether the returned data is singular or plural, not least for code legibility (in Perl this is less important because array variables are marked differently: this is yet another case of copying only the worst aspects of Perl)
Syntax
- Some -Filter terms use normal modifiers (e.g. -eq) alongside barewords (e.g. -Filter { Name -eq "Fred" }), making them look like closures from some other language that does not use sigils; this makes filter syntax completely at odds with the rest of the language, to the extent that there is no apparent way that it would even parse (see the comparison below).
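For comparison, using the ActiveDirectory module’s Get-ADUser as the example:

# The rest of the language: a script block with sigils, evaluated by PowerShell
Get-ADUser -Filter * | Where-Object { $_.Name -eq "Fred" }

# The -Filter syntax: barewords in something that merely looks like a script
# block, actually handed to the provider as a query string
Get-ADUser -Filter { Name -eq "Fred" }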
Arrays
Understanding nested arrays is made difficult by the way that dumping an array hides any nesting:
> $a = 1, 2, (3, 4)
> $a
1
2
3
4
> $a -join(", ")
1, 2, System.Object[]
> $a[0]
1
> $a[2]
3
4
The flattening during output gives the impression that PowerShell copied Perl’s list-flattening behaviour!
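The only way to discover the nesting is to interrogate the elements yourself:

> $a[2].GetType().Name
Object[]
> $a[2].Count
2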
Appending a nested array requires some visually jarring and erroneous-looking syntax:
> $a = 1, 2, 3, 4
> $a += @(5, 6)
# Result: $a contains a flat list: 1, 2, 3, 4, 5, 6

> $a = 1, 2, 3, 4
# Use a leading comma to indicate that you are adding a sub-array:
> $a += ,(5, 6)
# Result: (5, 6) is now a nested list: 1, 2, 3, 4, (5, 6)
Behaviour
Void context
Every command executed in implied void context writes the output that would have gone into a variable to the screen instead. When running a sequence of void instructions, every one has to be separately silenced if you are logging output or trying to write out meaningful output without pages of nonsense being thrown in your face. It seems that there is an explicit void context to actually prevent this, analogous to a (void) cast, which is insane.
Scripts should automatically suppress all output in void context, in a manner vaguely analogous to having @echo off in batch files, but automatically.
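Instead, you get to pick from the explicit silencing mechanisms, none of which happens by itself:

$null = New-Item -ItemType Directory example1      # assign to $null
New-Item -ItemType Directory example2 | Out-Null   # pipe to Out-Null
[void](New-Item -ItemType Directory example3)      # the (void)-style cast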
However, PowerShell is not consistent. Creating a new directory vomits out irrelevant information, while Expand-Archive uses a progress readout and shows no permanent output:
PS C:\Users\Daniel\Downloads> Expand-Archive .\XM.zip
PS C:\Users\Daniel\Downloads> mkdir example


    Directory: C:\Users\Daniel\Downloads


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        13/03/2022     21:02                example
Blink and it will look like Expand-Archive did nothing.
Numeric agreement
Subroutines in computer software should behave according to their name. Subroutines that return data should indicate in their name what data will be returned. Untyped languages such as Perl and JavaScript are free, should they wish, to return different data types in different circumstances, but such practice should be curtailed within reason. (Perl CGI made a grievous error in allowing the param() function to return either a single item or a list depending on context, which is insecure and dangerous.)
The caller should be fairly clear on whether they will be receiving a single item, or a collection of items. PowerShell however violates this principle, by using exclusively singular terms for cmdlet names even though many of them return lists of results. For example, Get-ChildItem (gci/dir) clearly indicates from its name that it will return exactly one item, or conceivably $null on error. Instead, it will return a single item object if it found one item, or a list of items if it found more than one. As a result, storing the results into a variable results in behaviour that, while predictable, is easy to overlook and liable to result in bugs:
PS C:\Users\Public\Icons> dir S* -n
Screenshot.ico
SVG.ico
PS C:\Users\Public\Icons> $a = dir S* -n
PS C:\Users\Public\Icons> $a.Length
2
PS C:\Users\Public\Icons> $a[0]
Screenshot.ico
PS C:\Users\Public\Icons> dir B* -n
BeebEm.ico
PS C:\Users\Public\Icons> $a = dir B* -n

# No we don’t have ten matches all of a sudden — this is the length in characters
# of the filename “BeebEm.ico”
PS C:\Users\Public\Icons> $a.Length
10

# The first array entry is simply the first character in the name:
PS C:\Users\Public\Icons> $a[0]
B

# Now try it without the -n:
PS C:\Users\Public\Icons> $a = dir B*
PS C:\Users\Public\Icons> $a.Length
# So, we have eighty-two thousand, seven hundred and twenty-six results now?
82726

# Yet “array” entry 0 exists when it shouldn’t:
PS C:\Users\Public\Icons> $a[0]

    Directory: C:\Users\Public\Icons

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        26/05/2014     15:58          82726 BeebEm.ico

# Try indexing into the rest of the array and nothing happens:
PS C:\Users\Public\Icons> $a[1]
PS C:\Users\Public\Icons> $a[82725]
PS C:\Users\Public\Icons>
Suddenly, .Length has changed from being the number of items found, to the length of the name of the single item found! The use of -n above (equivalent to Command Prompt’s dir /b) was to make the output terse; without it, .Length gives you the length in bytes of the single file found. However, $a[0] does still hand back the file object, while $a[1] to $a[$a.Length - 1] do nothing.
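The defensive idiom, which has to be sprinkled over every such assignment, is to force an array with @( ):

$a = @(dir B*)
$a.Length    # now 1: the number of items found, as it always should have been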
Untyped languages are not inherently wrong, but they place a much higher demand on visual explanation of behaviour, because there is no longer anything within the compiler to ensure that the programmer has understood the data types expected or returned. PowerShell is not even untyped: it’s a random mixture of typed and untyped designed to increase the bug count through maximum developer confusion. For most purposes, the command should be Get-ChildItems, which will never lead to bugs if the number of items found does not match the programmer’s expectations. If there is truly a need to get a single item (the equivalent to stat, for example), then a separate Get-ChildItem would unambiguously return no more than one item and would not accept wildcards in the path presented. Attempting to use Get-ChildItem with a wildcard path would throw an exception rather than return the first match, to avoid unexpected behaviour.
Pipelines
The strict adherence to classes instead of the flexibility of SQL JOIN or Perl hashes means that you often cannot create one-liner reports, because the fields you want to list alongside each result are trapped in a previous pipeline stage where you cannot get at them. Consider, for example, trying to measure the size of each subdirectory in the working directory, as a table of directory names and sizes. The “Shell” in “PowerShell” suggests that you should be able to execute useful one-liner commands, but the language is too poorly conceived for this to work in far too many cases.
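The nearest thing to that directory report is something like the following sketch, a one-liner in name only, because every field has to be rebuilt by hand into a new object:

Get-ChildItem -Directory | ForEach-Object {
    [pscustomobject]@{
        Name = $_.Name
        Size = (Get-ChildItem $_.FullName -Recurse -File |
                Measure-Object -Property Length -Sum).Sum
    }
}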
Likewise, for Get-Mailbox | Get-MailboxStatistics, it is not possible to output columns only returned by Get-Mailbox, as those are thrown away as Get-MailboxStatistics processes each object. Some people try to work around this by trying to remember fields during the pipeline, but none of the “solutions” they present actually work.
Error handling
It doesn’t.
Warnings
Warnings from compilers, frameworks and libraries exist to alert the programmer when they appear to have done something wrong or ill-advised, but where there is no proof of a mistake. This may include taking shortcuts that are legitimate but where the language runtime cannot distinguish intended shortcuts from actual bugs, such as processing undefined values and accessing nonexistent hash keys.
Warnings can be addressed using several methods:
- Do something else: replace the code that generates the warning with another approach that does not;
- Adjust your code to avoid the warning, which generally improves code robustness and reduces ambiguity;
- Suppress the specific warning or category of warnings, either globally, or within just the relevant block of code.
Perl does this right. Perl warnings always warn of avoidable situations, and warning categories can be selectively suppressed across the whole program or temporarily within a block of code. Perl warning behaviour is also specific to each Perl module. Some development environments number each warning so that individual warnings can be disregarded.
PowerShell warnings are useless. PowerShell has no warning classification: you cannot turn off a specific type of warning that is not relevant. PowerShell warns you about usage you are not even employing, and about what might happen if you use an option that you specifically selected knowing full well its pros and cons. The place for these messages is in the documentation, not the warning stream! For example, Search-Mailbox warns:
The Search-Mailbox cmdlet returns up to 10000 results per mailbox if a search query is specified. To return more than 10000 results, use the New-MailboxSearch cmdlet or the In-Place eDiscovery & Hold console in the Exchange Administration Center
The problem is that this warning is issued even when -SearchQuery is not specified! There is no option to remove the warning, because it does not relate to anything you did.
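The only recourse is the common -WarningAction parameter, a sledgehammer that silences every warning on the command, relevant or not (mailbox name made up):

# Suppresses all warnings for this invocation, including any that matter
Search-Mailbox -Identity fred -EstimateResultOnly -WarningAction SilentlyContinue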
New-PSSession warns:
PSSession … was created using the EnableNetworkAccess parameter and can only be reconnected from the local computer.
You don’t say. This characteristic is properly documented—just as is the limitation on -SearchQuery—and thus if I choose to use this feature, why should I be subjected to a warning? This is not a warning about a mistake I made: it is a warning against using a feature with intended behaviour.
Worse, Disconnect-PSSession issues this warning even when -EnableNetworkAccess was not set! That makes the warning not only a nuisance, but deceitful.
Thus, PowerShell fails the useful mitigations:
- The warnings relate to valid actions and thus they cannot be avoided by better programming; they are not warnings about ambiguous instructions that might be erroneous.
- There is no warning classification or identification, and thus no suppression of specific warnings.
Constants
PowerShell ISE (which is ghastly) does not have the power to handle constants correctly. When executing a script in ISE that sets constants, those constants are defined not within the context of the script execution, but ISE itself. Should you be so silly as to run the script again, the constants are still there from the prior execution, and attempts to define them become attempts to redefine them, which results in one of PowerShell’s characteristic error tirades.
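A minimal reproduction, with a made-up variable name:

# Run this script twice in ISE: the second run fails with an error that the
# variable already exists, because ISE kept the “constant” alive between runs
New-Variable -Name LogPath -Value 'C:\Logs' -Option Constant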
Properties
One of the more insidious ideas in object-oriented development is the idea of live properties: set a new value on a property, and a method is secretly invoked to apply that change. PowerShell is not consistent about how this is applied. Some objects returned (such as an Active Directory user) come back as cached records that you can play with as desired. Other PowerShell objects are live connections to the data source, and changing the PowerShell object will change the original object. There is nothing in the syntax or calling convention that indicates whether you have live data or not, and no consistency about when you might possess live data. It is safer not to use live properties in a shell script: it just adds extra burden to the workload of a system administrator whose job is to maintain systems, not write complex software. (It is safest never to implement live properties at all.)
Stateless commands versus objects
PowerShell is inconsistent about whether it wants to be a stateless command repertoire or a pure object-oriented API. For example, suppose you wanted to enable the PrintService/Operational log, which for some reason is disabled out of the box. The “correct” approach would be the following stateless command:
Set-WindowsEventLog Microsoft-Windows-PrintService/Operational -Enabled:$true
Of course, no such command exists. PowerShell Magazine demonstrates that it must be done with this long-winded code:
$log = New-Object System.Diagnostics.Eventing.Reader.EventLogConfiguration Microsoft-Windows-PrintService/Operational
$log.IsEnabled = $true
$log.SaveChanges()
Presumably the above example, from 2013, pre-dates the ability to do this:
$log = Get-WinEvent -ListLog Microsoft-Windows-PrintService/Operational
$log.IsEnabled = $true
$log.SaveChanges()
Note how (as noted later) Get-WinEvent does not actually return a single event; here it returns event log configuration but it can return all manner of other record types. There is not necessarily anything wrong in principle with the above approach to applying changes, but it’s simply not consistent with other aspects of PowerShell. The New-Object approach demonstrates something that is achievable through basing PowerShell on .NET: the ability to perform tasks for which there is not yet any native PowerShell command. On the other hand, this only adds to PowerShell’s complexity and inconsistency. The Get-WinEvent cmdlet—which should have been at least half a dozen separate cmdlets—should have introduced a proper method of saving changes, but instead, the incongruent object-based approach remains.
You can also set the IsEnabled property this way, for extra inconsistency:
$log.set_IsEnabled($true)
Some examples do use this approach instead. A reflection-based syntax would have some kind of merit, but why have a dedicated function that is directly equivalent to a property?
(Also, stop making up stupid words like “eventing”. “Error” is not a verb, and neither is “event”, nor does it make the tiniest bit of sense for it to be one.)
User interface
Tab complete does not make sense. When using wildcards to complete command names, they match incorrectly: “A*B” matches “ABCD”. There must be a special award for an organisation that has managed to get something as simple as a basic wildcard behaviour wrong.
Also, the Verb–Noun ordering means that typing the noun and pressing the tab key does not cycle through commands related to that noun, as the noun comes last. If you forget whether you were meant to use “Create-Foo”, “New-Foo” or “Make-Foo”, you cannot type “Foo” then press tab. (Here, you would want “*Foo” except that would also match “*FooBar” because of the incorrect wildcard expansion.)
Why does | ft -autosize have to be specified manually to get it to actually figure out how wide to format a table? It’s a computer, it can figure this stuff out by itself by now, right? There is a legitimate reason for this one: in order to format the table, PowerShell needs to know the size of every “cell” in the output, and this means that progressive output isn’t possible. The problem is that it pathologically makes columns too narrow. This is not an easy problem to solve, but it just adds to the time consumption of even the simplest of tasks. Additionally, it has a habit of giving you the most useless subset of columns for any object: you have to use select to request the useful ones, after carefully scrutinising the objects in fl to figure out what they happened to be called. Does a reference to a file use FullName or FullPath? Depends who wrote the cmdlet of course.
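The resulting ritual for getting a usable table out of any given cmdlet:

# Step 1: dump every property to find out what the useful ones are called
Get-ChildItem | fl *

# Step 2: ask for them by name, and only then ask for sensible column widths
Get-ChildItem | select Name, Length, LastWriteTime | ft -AutoSize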
Library
Sessions
PSSession commands are split into at least two modules, so on the Microsoft documentation website they are not even all present in the same list of commands.
Registry
PowerShell provides the ability to browse the Registry via the command line as though it were a directory tree. Why on earth would anyone want to do that? It’s insane. The end result is that Registry paths in PowerShell do not match those used everywhere else, including the long-awaited address bar in Registry Editor, rendering copy and paste of Registry paths inoperable. UNIX’s “everything is a file” works as (dubiously) well as it does because it was a fundamental part of the OS. PowerShell is too late to pretend that everything in Windows is a file. It isn’t.
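For example, the same key in Registry Editor’s address bar and in PowerShell:

# Registry Editor address bar:
#     HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
# PowerShell’s provider path for the same key:
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows'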
File handling
Writing to a log file can be done with the stateless instruction Add-Content. This brain-dead approach opens and closes the target file with every invocation of Add-Content. It is not safe to have two Add-Content instructions in succession, as the second one can hit a sharing violation trying to re-open the file that hasn’t fully closed from the previous append request, such as when the target file is located on a network share. You’d think that the idea of working with files was new to the PowerShell designers.
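What any sane design would let you do, sketched here with the underlying .NET class (path hypothetical):

# Open the log once, append as many lines as needed, close once
$writer = [System.IO.StreamWriter]::new('\\server\share\app.log', $true)  # $true = append
try {
    $writer.WriteLine('first entry')
    $writer.WriteLine('second entry')
}
finally {
    $writer.Dispose()
}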
Event log handling
Get-WinEvent has multiple return types, none of which is a single event. In an attempt to get away from the already confusing and largely useless Get-EventLog (which only understands the classic logs), the new command has a name that no longer makes any sense. There is no logic to strictly enforcing a verb–noun naming convention if you are not prepared to implement commands that do as their own name indicates. Get-WinEvent is a huge catch-all function that returns a variety of record sets including event logs, event log providers and event log IDs. Every one of these output modes should be a separate command whose return type is indicated by its name.
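The modes in question, all hiding behind the one name:

Get-WinEvent -ListLog *              # returns event log configurations
Get-WinEvent -ListProvider *         # returns event providers
Get-WinEvent -LogName Application    # returns actual event records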
File downloads
PowerShell cmdlet Invoke-WebRequest allows one to download files from the Web. This cmdlet has been aliased to wget in what appears to be a cruel joke.
Downloading a file is not straightforward. The user agent is required to choose a name for the file. This name by default is the final portion of the URL (ignoring the anchor and query string parts). The webserver is also able to specify what name to use. When you use the genuine wget for Windows, the specified URL is downloaded to disk exactly as you would expect. Invoke-WebRequest however requires you to figure this out for yourself and choose a name; if no filename is given, the file is downloaded into RAM instead. Having a cmdlet named Invoke-WebRequest with no means to save the resulting file is not wrong in itself, although it’s unfortunate that Microsoft failed to recognise the usefulness of such a tool. However, aliasing this command to wget so that Windows now has a “wget” command that doesn’t actually do what wget does is idiotic. (The workarounds to get “wget” to download files are tedious.)
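The workaround, such as it is, with a hypothetical URL: you spell the filename out by hand.

# Genuine wget: saves archive.zip automatically
#     wget https://example.com/files/archive.zip
# PowerShell's “wget”: name the file yourself, or get nothing on disk
Invoke-WebRequest 'https://example.com/files/archive.zip' -OutFile 'archive.zip'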
Other
The Windows Command Prompt does not support having a UNC path as the working directory. PowerShell does, but in a bizarre manner. The end result is a prompt such as:
PS Microsoft.PowerShell.Core\FileSystem::\\SOME-SERVER\ShareName>
What benefit is there to anyone in such a painfully long prompt? Why are UNC paths not simply native and natural by this point?
Appendix
The following is a genuine Microsoft Exchange tip of the day. It is difficult to determine whether or not this was meant to be a joke.
Tip of the day #85:

Wondering how many log files are generated per server every minute? Quickly find out by typing:

Get-MailboxDatabase -Server $env:ComputerName | ?{ %{$_.DatabaseCopies | ?{$_.ReplayLagTime -ne [TimeSpan]::Zero -And $_.HostServerName -eq $env:ComputerName} } } | %{ $count = 0; $MinT = [DateTime]::MaxValue; $MaxT = [DateTime]::MinValue; Get-ChildItem -Path $_.LogFolderPath -Filter "*????.log" | %{ $count = $count + 1; if($_.LastWriteTime -gt $MaxT){ $MaxT = $_.LastWriteTime}; if($_.LastWriteTime -lt $MinT){ $MinT= $_.LastWriteTime} }; ($count / ($MaxT.Subtract($MinT)).TotalMinutes) } | Measure-Object -Min -Max -Ave