Compare-Object: No more weird Foreach… -Contains code

Yesterday, I was again faced with the task of using PowerShell to determine whether one array contained any of the values in another array. Specifically, I had an array of AD group Distinguished Names (DNs) and needed to determine if users were members of any of these groups (an LDAP filter would probably be easier, but I was already invested in solving this). Typically, I would handle this with some kind of foreach loop: for each user, loop through each of their group memberships and see if the group array contains their group string. This always feels terribly inefficient, so I wanted to find a cleaner way of handling these types of comparisons.
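The pattern I wanted to retire looks something like this (a rough sketch; $user and $includeGroups are stand-ins for whatever your script uses):

```powershell
# For each user, walk every group membership and test it against the target array
$matched = $false
foreach ($group in $user.MemberOf) {
    if ($includeGroups -contains $group) {
        $matched = $true
        break
    }
}
```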

Looking around online, I realized PowerShell has a Compare-Object cmdlet, which sounded promising. It works by accepting a -ReferenceObject and a -DifferenceObject and comparing which values are the same or different between the two. This cmdlet is almost helpful, but it really works better for someone interacting with the shell than for use in a script. The output looks something like this:

PS> compare-object @("apple","pear","banana") @("pear","banana","orange")

InputObject SideIndicator
----------- -------------
orange      =>
apple       <=

The “SideIndicator” tells us which object/array (the reference or the difference object) has a different value. In this example, the second array contains “orange,” but the first array does not. Conversely, the first array contains “apple,” but the second does not. Again, handy if you are in the shell, but how do you use this in a script? Well, here is the short of what I came up with:

compare-object $_.MemberOf $includeGroups -includeequal -excludedifferent

You might first notice that there are no “-ReferenceObject” or “-DifferenceObject” parameter names spelled out above. That is because, as with all PowerShell cmdlets, if you specify parameters in the right order, you can skip those names. So, in this case, $_.MemberOf is the reference object and $includeGroups is the difference object. The next two switches are very important for this to work. “-includeequal” tells the cmdlet to return the items that match between the two objects and “-excludedifferent” prevents it from returning the objects that are different. This is because, for this comparison, we really only care about the items that match across arrays.

Continuing the fruit example above, here is what we see:

PS> compare-object @("apple","pear","banana") @("pear","banana","orange") -includeequal -excludedifferent

InputObject SideIndicator
----------- -------------
pear        ==
banana      ==

This “==” tells us that “pear” and “banana” exist in both arrays. Since we exclude differences, if there are no matches this cmdlet will return $null. That means we can do something like this:

if ( compare-object $MemberOf $includeGroups -includeequal -excludedifferent ) {

  #Do something

}

Or use it inline as a Where-Object filter:

... | Where { compare-object $_.MemberOf $includeGroups -includeequal -excludedifferent }

Of course, format it however you would like and surround it with parentheses when using multiple conditions. I feel a little silly that this cmdlet has been there since PowerShell version 3, but I am at least satisfied that I no longer need to employ cumbersome foreach loops in these situations.



Who Isn’t Taking Out the Trash? Use WinDirStat and PowerShell to Find Out.

Using WinDirStat to find unnecessary files on a hard drive is a pretty routine task. A common find is that someone’s Recycle Bin holds large zip or executable files. WinDirStat is helpful for showing this to you, but it only reveals the user’s local SID, such as:

S-1-1-12-1234567890-123456789-123
It’s not terribly difficult to track down the associated profile using regedit. Still, clicking through a series of plus buttons in a GUI seems inefficient. Here is a simple method I used today to make this process a little quicker. Ok, so it took a bit longer than clicking through the first time, but it will be quicker for me next time:

((get-itemproperty "hklm:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\*") | where {$_.pschildname -like "S-1-1-12-1234567890-123456789-123456789-123"}).ProfileImagePath

This will return the ProfileImagePath value, which is the file path to the guilty profile. If you want to cut straight to the username, try this:

(((get-itemproperty "hklm:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\*") | where {$_.pschildname -like "S-1-1-12-1234567890-123456789-123456789-123"}).ProfileImagePath).split("\")[-1]
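If the profile’s account still resolves, .NET can also translate the SID straight to a username, skipping the registry entirely. A sketch, using the same made-up SID:

```powershell
# Translate a SID to DOMAIN\username (only works while the account still exists)
$sid = New-Object System.Security.Principal.SecurityIdentifier("S-1-1-12-1234567890-123456789-123")
$sid.Translate([System.Security.Principal.NTAccount]).Value
```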

Quick Tip for Moving Exchange Logs

We recently had a RAID go bad that held the logs for one of our Exchange mailbox servers. It didn’t fail completely, so I was able to move logs off without much pain. Along the way, I found a way to cut downtime during the move process. The EMC’s method for moving logs seems to take an incredibly long amount of time, so it is quicker, in my experience, to copy the log files yourself. But if you really want to cut down the time the associated mailbox database will be offline, sort the log folder contents by date modified and copy all the older log files and other files not currently being used by Exchange. After that, dismount the database, copy the remaining files, move the storage group path (using the -configurationonly switch), and mount the database again. This can take downtime from several minutes to a few seconds. In short:

  • Sort the log folder by date modified
  • Select and copy all the log files not currently in use by the system to the new location
  • Dismount the mailbox database(s)
  • Copy the remaining log files, etc.
  • Run Move-StorageGroupPath with the -logfolderpath and -configurationonly switches
  • Mount the database(s)
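In the Exchange 2007 shell, the cutover portion of those steps looks roughly like this (the storage group, database, and path names below are placeholders):

```powershell
# Run after copying the older, unused log files to the new location
Dismount-Database "Server01\First Storage Group\Mailbox Database"
# ...copy the remaining, in-use log files to E:\NewLogs here...
Move-StorageGroupPath "Server01\First Storage Group" -LogFolderPath "E:\NewLogs" -ConfigurationOnly
Mount-Database "Server01\First Storage Group\Mailbox Database"
```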

jQuery getJSON from PowerShell via PHP on IIS: A Frustrating Gotcha

(Once you have your PHP on IIS environment set up and ready to go, you can check out this example code. You can also take a look at my new project, Gumper, on Bitbucket – an API for executing PowerShell and other scripts remotely.)

And in 40 years, this subject line will make absolutely no sense whatsoever…

About two years ago, I wrote a PowerShell script that generated a .Net form with all sorts of tools for our Help Desk. By searching for an Active Directory user, the Help Desk could instantly see if the user’s account or password was expired, whether or not the user had a mailbox, if the user’s mailbox database was mounted, quota information, and more. Recently, I revisited the script and decided it was time to take it to the web. Running these tasks from a PowerShell GUI is not terribly flexible or efficient. So, I set out to learn what it seems many Sys Admins like myself are learning: how to run PowerShell scripts via PHP.

Right out of the gate, I am already committing a bit of heresy. Rather than installing WAMP, I decided to stick with the IIS instance where I had already installed PHP. And why use PHP to run Microsoft PowerShell (built on .NET)? Quite frankly, I like PHP and find it easy to learn. This project is still in its infancy, but there have already been a few important snags I thought worth sharing.

Use shell_exec()

If you want to launch a PowerShell script from a PHP file, you have a few options: exec(), shell_exec(), and system(). Using system() doesn’t seem to make sense if you intend to get a response from the server. This would be intended more for something like kicking off a scheduled task or another background process. Exec() will do the job, but it will split your response up into an array based on line breaks. This might be OK depending on how you want data returned. But, for my purposes, I chose shell_exec() so I could format the data a bit. Shell_exec() will return the output of your script as a string.

Keep your script silent

Note that “shell_exec()” returns all output from your script. That means errors, write-hosts, and everything else that pops up when you run a script. So, be sure to make your script as silent as possible and only “return” the little bit of data you want passed into PHP. This might mean a lot of Try/Catches (which is a good practice anyway).

Launch your script

This was actually a pretty easy one to conquer. Many people have examples of the basic syntax to use in order to launch a PowerShell script. Here is an example of what I am using:

shell_exec("powershell -NoProfile -File $scriptPath $argString < NUL")

(You may also need to pass the “-ExecutionPolicy” switch with a value depending on your setup.)

The most critical part of this is the “< NUL” bit at the end. Without it, PHP will never get the output from PowerShell and will, instead, wait and wait and wait. You will also notice that I use two variables: $scriptPath and $argString. These are PHP variables that I pass into a function and are used to call the script file along with any arguments. So, if $scriptPath is “C:\web\script1.ps1” and $argString is “-User jdoe”, the above line would render as:

shell_exec("powershell -NoProfile -File C:\web\script1.ps1 -User jdoe < NUL")


Set up authentication

Remember that IIS has different options for authentication. I chose Basic with HTTPS, but someone else may have a better idea since I really just went with whatever worked first. The main thing is to turn off Anonymous authentication. The reason Basic works well for my situation is that each page runs as the authenticated user with that user’s permissions. The importance of this will become evident below.

Enable fastcgi.impersonate

Even if you are authenticated, you probably won’t be able to launch any PowerShell scripts by default. This is because PHP does not pass along your authentication to the Windows command line unless fastcgi.impersonate is enabled. Enabling this in php.ini makes it so every PHP script that runs, runs as the authenticated user. Keep that in mind, because it may change how you design your site.

To enable fastcgi.impersonate, locate your PHP.ini file (Maybe C:\Program Files (x86)\PHP\v5.x\php.ini) and un-comment the line that says “;fastcgi.impersonate = 1;” by removing the semicolon (;) at the beginning of the line. The line should look like this when you are done:

fastcgi.impersonate = 1;

After this, save php.ini and restart your web site.

Return JSON

If you are going to manipulate the data inside the browser, it makes sense to return your data from PowerShell as JSON. This is pretty straightforward. For example:

$a = "apple"

$b = "banana"

$c = "coconut"

$json = "{`"a`": `"$a`",`"b`": `"$b`",`"c`": `"$c`"}"

Return $json

This should spit out the following:

{"a": "apple", "b": "banana", "c": "coconut"}

Be sure to wrap strings in escaped quotation marks (`") to keep your JSON valid, and validate your syntax with JSONLint.
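If you are on PowerShell 3.0 or later, ConvertTo-Json will build the string and handle the escaping for you:

```powershell
# ConvertTo-Json takes care of quoting and escaping
$fruit = @{ a = "apple"; b = "banana"; c = "coconut" }
$fruit | ConvertTo-Json
```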

Beware the Console Width

This is the real reason for writing; the thing that nearly made me go brain-dead. I chose to use jQuery’s getJSON function since I was already returning a JSON array. There are many great tutorials that show you how to accomplish this, so I won’t get into that. Despite all the great tutorials, I was getting nowhere. No matter what I did–callback to a function, change the mode to synchronous (WHAT!?), use $.ajax, click with my right pinky-finger–nothing worked. I could see the data in FireBug, but I could not get anything to show up on the page. This frustrated me all night. Today, I was watching the same data pass through FireBug, refreshing, clicking again, and feeling utterly hopeless (OK, so maybe there are more important things in life than PowerShell and PHP), when I finally realized something important: maybe FireBug wasn’t wrapping that really long Distinguished Name in the JSON array for easy reading. Maybe the data was coming back to the browser with a real line break, thereby invalidating the JSON.

Yup, that was it.

It turns out, when shell_exec() launches PowerShell, the data returned is formatted to the default console size: 80 characters wide. Meaning, if your JSON object goes beyond 80 characters, it will break to the next line of the console and the data you get back will be invalid. For example:

{"distinguishedName": "OU=weeny,OU=teeny,OU=bitsy,OU=itsy,DC=reallylongnamiccusd


Instead of…

{"distinguishedName": "OU=weeny,OU=teeny,OU=bitsy,OU=itsy,DC=reallylongnamiccusdomainiccus,DC=net"}

So, what are the options? Well, there are two I can think of. First, build well-formed line breaks into your JSON like so:

$a = "apple"

$b = "banana"

$c = "coconut"

$json = "{`n`"a`": `"$a`",`n`"b`": `"$b`",`n`"c`": `"$c`"`n}"

Return $json

Adding the new line character (`n) will cause the JSON to break in a spot that will not invalidate the array. This should return:


"a": "apple",

"b": "banana",

"c": "coconut"


Option 2 is to adjust the console size if your script will output more than 80 characters on a line. The “Hey, Scripting Guy!” blog has a great article on how to do this.
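For reference, the buffer width can also be raised from within the script itself; a minimal sketch:

```powershell
# Widen the console buffer so long lines are not wrapped at 80 characters
$rawUI = (Get-Host).UI.RawUI
$size = $rawUI.BufferSize
$size.Width = 500
$rawUI.BufferSize = $size
```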

As I said earlier, this project is in its infancy and I am sure many more gotchas await me. However, this last one seemed like something others might run into. It’s so silly, but so frustrating. And, I imagine anything run from the command prompt will yield the same results.

Happy coding.

Fixing Blackboard’s Retention Center

(Keep in mind that Blackboard expects a fix for this in Learn 9.1 SP15)

Well, it’s upgrade week for us and our Blackboard Learn system. We went from 9.1 SP7 to SP11 yesterday with relatively few hiccups. One of the tools our Instructional Design team plans to take advantage of is the new Retention Center. This building block replaces the Early Warning System and gives instructors a graphical overview of student engagement. We tried working with it on our development server, but ran into a known issue whereby Retention Center becomes corrupt if you update it from Software Updates (again, fix not planned until SP15).

When we applied SP11 on our production server, Retention Center fared no better. Retention Center version 1.0 was installed by default. Despite showing as available in the Sys Admin panel, it was nowhere to be found inside course tools or the global menu. Installing the latest update only led to the “Corrupt” indicator appearing next to Retention Center and the Early Warning System still showed up inside courses. Pulling up the (MS)SQL, I noted the following tables were in the wrong schema:


If you are experiencing this same problem, you might see these tables prefixed with either “BBLEARN” or “BB_BB60.” The fix that worked for me was pretty simple:

  1. Update Retention Center
  2. Transfer the tables to the correct schema (in my case “dbo”)
  3. Uninstall Retention Center
  4. Reinstall Retention Center

The MSSQL syntax to transfer the schema was simply 'ALTER SCHEMA dbo TRANSFER bblearn.table_name' for each table. Since I am both a PowerShell and Invoke-Sqlcmd fan, I used PowerShell to make things a little quicker:

$bbTables = @('ews_course_users') #add each table that landed in the wrong schema
ForEach ($t in $bbTables){Invoke-Sqlcmd -server server01\bbinstance -database BBLEARN -query "ALTER SCHEMA dbo TRANSFER bblearn.$t"}

This did the trick for us, though I am not a Bb engineer or a DBA, so proceed with caution if you intend to do likewise.

List a Printer in Active Directory Using a CNAME

(See also David’s comment below, which uses the native ActiveDirectory module)

This was a fun one for me…

One of the great questions we are dealing with now is how to make printers easy to locate on the network. Traditionally, users mapped a printer by UNC path only. This is not always helpful when identifying location and features. Although our printers are published in Active Directory, browsing the directory for a printer is not standard practice. I have to take some of the blame for this since I worked at our Help Desk for several years and reinforced this method. Now, I am trying to convince others that browsing the directory is much simpler and allows for greater freedom in printer naming (i.e.: including location information in the printer name is no longer critical).

However, after tackling how to list more than 20 directory printers in the first “Add Printer Wizard” screen of Windows 7, I unearthed a problem: When browsing the directory, Windows will map the printer via its print server’s Windows name, rather than a CNAME (or alias). That is kind of a no-brainer when you consider how that information gets into AD from the server, but it is an important point to note. Why is this a problem? We use an alias for printer mappings so we can swap print servers without axing all printing. If everyone maps directly to the server’s Windows name, they will lose their mapping when that server is replaced. Other IT shops use DNS round-robin to load-balance print servers. Adding a printer from the browser negates that benefit.

There are two options at this point. First, you can choose to add the printers to AD manually. This allows you to modify the UNC path and server name without a fuss. However, it does not update features, model, location, etc. automatically. When you choose “List in the directory” from the print server, all of this is handled automatically. The second option, which I suggest, is to let the print server list your printers and update them for you. This, of course, requires a workaround for the alias.

The Quest for Option 2

Naturally, I hit the Googles. When a few searches turned up nothing, I had a realization: All of this information is in AD and has to be accessible. I turned to Quest’s AD cmdlets and began exploring the properties of the AD object known as a “printqueue”. I was surprised to find these objects nested within the print server’s container. Three properties carried importance to me: serverName, shortServerName, and uNCName (interesting capitalization, by the way). In the end, the only property in this list that determines the network path to the server is uNCName. But, the other properties are good to update for display purposes.

Using Quest Active Roles, I was able to update the relevant properties:

set-QADobject CN=SOMESERVER-SomePrinter,CN=SOMESERVER,OU=Servers,DC=domain,DC=com -objectattributes @{servername="";shortservername="prints";uNCName="\\\SomePrinter"}

You can also use the printer’s name instead of its distinguished name. For example, instead of “CN=SOMESERVER-SomePrinter,CN=SOMESERVER,OU=Servers,DC=domain,DC=com” you could simply use “SOMESERVER-SomePrinter”. In most organizations, this should be distinguished enough.

Directory-Wide Update Script

#Add the Quest Active Roles AD Management snapin but silently continue if it fails
 Add-PSSnapin Quest.ActiveRoles.ADManagement -EA 0

#Variables for the print servers' real names, based on DN
 $PrintServersDN = @("CN=SOMESERVER,OU=Servers,DC=domain,DC=com")
 $PrintServersDNS = @()

#Variables for the print server's CNAME/alias
 $PrintServerAlias = "prints"
 $PrintServerAliasDnsSuffix = ""
 $PrintServerAliasLong = "$PrintServerAlias.$PrintServerAliasDnsSuffix"

#Get all the print servers' FQDNs and add them to the $PrintServersDNS array
ForEach ($s in $PrintServersDN){
  $serverDNS = (Get-QADComputer $s | select DnsName).DnsName.tolower()
  $PrintServersDNS += $serverDNS
}

$printers = get-qadobject -type printqueue -includeallproperties | where {$PrintServersDNS -contains $_.servername}

If ($printers -ne $null){
  Foreach ($p in $printers){
    $printShareName = $p.printsharename
    $printerDN = $p.DN
    $uncName = "\\$PrintServerAliasLong\$printShareName"
    set-QADobject $printerDN -objectattributes @{servername="$PrintServerAliasLong";shortservername="$PrintServerAlias";uNCName="$uncName"}
  }
}
Else{Write-Host "No changes to be made"}

I was excited by the results. The Add Printer Wizard picked up the new alias and mapped the printer according to the server’s alias. The downside? Every time I made a configuration change on a print queue, the server (as mentioned earlier) automatically updated AD with the print server’s Windows name. That’s when the wonderful Task Scheduler came to my rescue. I could simply set up a task triggered by 306 events in the Microsoft>Windows>PrintService>Operational log. But, this took a lot more study and brainstorming than I expected. And I learned a lot more about the Task Scheduler.

Here was the big question: “How can I run a script every time a printer’s configuration changes that will not have to update the whole directory every time?” While considering a single triggered task with a pause long enough to cover all changes in the event of a mass update, I stumbled across a post showing how to create and pull variables from a scheduled task using Value Queries. This may not be news to you, but it was to me. Good news: I was able to write a script (shared below) that triggers at each printer configuration change and updates only that printer. Then, I hit another wall: a race condition. When the script fired right away, it would not detect any changes in AD because of either replication or a delayed write. Delaying the task by 30 seconds did the trick. It’s not perfect, but it does work. You can tweak your own settings. I just have to avoid changing two printers less than 45 seconds apart.

Single Update Script


#The printer name is passed in by the scheduled task
Param($PrinterName)

#Add the Quest Active Roles AD Management snapin but silently continue if it fails
Add-PSSnapin Quest.ActiveRoles.ADManagement -EA 0

#Get the printer's share name (in case it is different than the printer's name)
$PrinterShareName = (Get-ItemProperty hklm:\system\currentcontrolset\control\print\printers\$PrinterName)."Share Name"

#Variables for the print server's real names
$PrintServer = (Get-Item env:computername).Value
$PrintServerDNS = (Get-QADComputer $PrintServer).DnsName

#Variables for the print server's CNAME/alias
$PrintServerAlias = "prints"
$PrintServerAliasDnsSuffix = ""
$PrintServerAliasLong = "$PrintServerAlias.$PrintServerAliasDnsSuffix"

#Find the printer object in AD
$PrinterADname = "$PrintServer-$PrinterShareName"
$PrinterADobject = Get-QADObject -Type printqueue $PrinterADname -IncludeAllProperties | where {$_.servername -eq $PrintServerDNS}

If ($PrinterADobject -ne $null){
  $printerDN = $PrinterADobject.DN
  $uncName = "\\$PrintServerAliasLong\$PrinterShareName"
  set-QADobject $printerDN -objectattributes @{servername="$PrintServerAliasLong";shortservername="$PrintServerAlias";uNCName="$uncName"}
  Write-Host "Done"
}
Else{Write-Host "No changes to be made"}

Please note, this script is intended for Server 2008. Server 2003 stores its printers elsewhere in the registry. While modifying the script to detect and respond to Server 2003 and Server 2003 x64, I realized it probably would not be helpful anyway as a triggered event.

To complete the process, I created a task (following the steps in the linked Technet blog above) which set “param1” to the task’s variable “$(param1)”. This was passed in as the “$PrinterName” parameter variable in the PowerShell script. Here is the Value Query I created in the task:

<Value name="param1">Event/UserData/PrinterSet/Param1</Value>

Further configuration included setting the task to delay itself by 30 seconds and to not launch a new instance if triggered again. Otherwise, you may get 20 parallel processes. That’s basically it:

  1. Create your script
  2. Create a scheduled task based on an event 306 in the PrintService>Operational log, delay it by 30 seconds, and make sure it passes the printer name parameter to your script

The directory-wide script is good for cleanup, but the single update script is better for regular updates. This should keep your directory up-to-date with your print server(s) CNAME.

Targeting Windows 7 with Group Policy: Why White Space Matters

This morning, I endeavored to do a simple thing with Group Policy: write a WMI filter that targeted Windows 7 clients. So, I double-checked the syntax required, and ran the following PowerShell command to get the right text:

get-wmiobject win32_operatingsystem | select caption

Simple enough. Here was my WMI query:

SELECT * FROM Win32_OperatingSystem WHERE Caption = "Microsoft Windows 7 Professional"

Result? “Filtering: Denied (WMI Filter)”

I can’t count the times I looked over the syntax, compared it with others online, checked the namespace, and ran gpresult. Finally, I came across a post on TechNet.

The suggested answers just sent me through the same stuff I had already checked. Expecting the usual slew of “I’m having this problem too!” responses, I begrudgingly scrolled to the bottom. Surprisingly, I did not find the typical replies, but rather a response from the user “aac396”:

‘I just ran into the same problem and found the answer.  The WMI result of caption has a space at the end of Microsoft Windows 7 Enterprise.  My query was “Microsoft Windows 7 Enterprise” and it never worked.  I added a space at the end of “Microsoft Windows 7 Enterprise ” and it’s fine now.’

Note the trailing space after “Enterprise.” No one had marked this response as helpful, so I doubted it would actually work for me. Maybe it was one of those one-in-a-million solutions that everyone else raises their eyebrows at. But, quite to my satisfaction, the following query worked:

SELECT * FROM Win32_OperatingSystem WHERE Caption = "Microsoft Windows 7 Professional "

I had to laugh. Suddenly, the other suggestion of searching “Microsoft Windows 7%” made perfect sense. And it is something that really would not stand out in a PowerShell window. So, when repeatedly traumatizing your head against a blunt object over WMI filtering, check for any white space Microsoft may have included as a bonus.
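One way to make a trailing space visible is to bracket the caption in delimiters and check its length:

```powershell
# Delimiters and a length check expose trailing white space in the Caption value
$caption = (Get-WmiObject Win32_OperatingSystem).Caption
"[$caption]"
$caption.Length
```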