Compare-Object: No more weird Foreach… -Contains code

Yesterday, I was again faced with the task of using PowerShell to determine whether one array contained any of the values in another array. Specifically, I had an array of AD group Distinguished Names (DNs) and needed to determine if users were members of any of these groups (an LDAP filter would probably be easier, but I was already invested in solving this). Typically, I would handle this with a foreach loop: for each user, loop through each of their group memberships and see if the group array contains their group string. This always feels terribly inefficient, so I wanted to find a cleaner way of handling these types of comparisons.

Looking around online, I realized PowerShell has a Compare-Object cmdlet, which sounded promising. It works by accepting a -ReferenceObject and -DifferenceObject, and comparing which values are the same or different between the two. Now, this cmdlet is almost helpful, but really works better for someone interacting with the shell, rather than a script. The output looks something like this:

[Screenshot: default Compare-Object output, with an InputObject column and a SideIndicator column of => and <= arrows]
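Here is a toy version you can run in the shell to see the same thing (the fruit arrays are stand-ins for the real group DN arrays):

```powershell
$first  = "apple","pear","banana"
$second = "pear","banana","orange"

# The default comparison only reports the differences between the two arrays
Compare-Object $first $second
```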

The “SideIndicator” tells us which object/array (the reference, or the difference object) has a different value. In this example, the second array contains “orange,” but the first array does not. Conversely, the first array contains “apple,” but the second does not. Again, handy if you are in the shell, but how do you use this in a script? Well, here is the short of what I came up with:

compare-object $_.MemberOf $includeGroups -includeequal -excludedifferent

You might first notice that there are no “-ReferenceObject” or “-DifferenceObject” parameter names spelled out above. That is because, as with all PowerShell cmdlets, if you specify parameters in the right order, you can skip those names. So, in this case, $_.MemberOf is the reference object and $includeGroups is the difference object. The next two switches are very important for this to work. “-includeequal” tells the cmdlet to return the items that match between the two objects and “-excludedifferent” prevents it from returning the objects that are different. This is because, for this comparison, we really only care about the items that match across arrays.

Continuing the fruit example above, here is what we see:

[Screenshot: Compare-Object output with -IncludeEqual, showing “pear” and “banana” with “==” side indicators]
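In shell form, with toy fruit arrays standing in for the real group arrays:

```powershell
$first  = "apple","pear","banana"
$second = "pear","banana","orange"

# Only values present in BOTH arrays come back, marked with ==
Compare-Object $first $second -IncludeEqual -ExcludeDifferent
```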

The “==” SideIndicator tells us that “pear” and “banana” exist in both arrays. Since we exclude differences, this cmdlet returns nothing at all when there are no matches, which evaluates as false. That means we can do something like this:


if ( compare-object $MemberOf $includeGroups -includeequal -excludedifferent ) {

  #Do something

}

Or…

... | Where { compare-object $_.MemberOf $includeGroups -includeequal -excludedifferent }

Of course, format it however you would like and surround with parentheses when using multiple conditions. I feel a little silly that this cmdlet has been around since the earliest versions of PowerShell, but I am at least satisfied that I no longer need to employ cumbersome foreach loops in these situations.

 

Who Isn’t Taking Out the Trash? Use WinDirStat and PowerShell to Find Out.

Using WinDirStat to find unnecessary files on a hard drive is a pretty routine task. A common find is large zip or executable files sitting in someone’s Recycle Bin. WinDirStat is helpful for showing this to you, but it only reveals the user’s local SID, such as:

S-1-1-12-1234567890-123456789-123456789-123

It’s not terribly difficult to track down the associated profile using regedit. Still, clicking through a series of plus buttons in a GUI seems inefficient. Here is a simple method I used today to make this process a little quicker. Ok, so it took a bit longer than clicking through the first time, but it will be quicker for me next time:


((get-itemproperty "hklm:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\*") | where {$_.pschildname -like "S-1-1-12-1234567890-123456789-123456789-123"}).ProfileImagePath

This will return the ProfileImagePath value, which is the file path to the guilty profile. If you want to cut straight to the username, try this:


(((get-itemproperty "hklm:\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\*") | where {$_.pschildname -like "S-1-1-12-1234567890-123456789-123456789-123"}).ProfileImagePath).split("\")[-1]
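As an aside, if the SID still maps to a live account, .NET can translate it for you directly. A sketch (the SID below is the same made-up example, so substitute a real one; Translate throws if the SID cannot be resolved):

```powershell
# Translate a SID to a DOMAIN\username string
$sid = New-Object System.Security.Principal.SecurityIdentifier("S-1-1-12-1234567890-123456789-123456789-123")
$sid.Translate([System.Security.Principal.NTAccount]).Value
```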

Quick Tip for Moving Exchange Logs

We recently had a RAID go bad that held the logs for one of our Exchange mailbox servers. It didn’t fail completely, so I was able to move logs off without much pain. Along the way, I found a way to cut downtime during the move process. The EMC’s method for moving logs seems to take an incredibly long amount of time, so it is quicker, in my experience, to copy the log files yourself. But if you really want to cut down the time the associated mailbox database will be offline, sort the log folder contents by date modified and copy all the older log files and other files not currently being used by Exchange. After that, dismount the database, copy the remaining files, move the storage group path (using the -configurationonly switch), and mount the database again. This can cut downtime from several minutes to a few seconds. In short:

  • Sort the log folder by date modified
  • Select and copy all the log files not currently in use by the system to the new location
  • Dismount the mailbox database(s)
  • Copy the remaining log files, etc.
  • Run Move-StorageGroupPath with the -logfolderpath and -configurationonly switches
  • Mount the database(s)
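The dismount-through-mount portion of those steps might be sketched like this in the Exchange 2007 Management Shell (server, storage group, and path names here are placeholders for your environment):

```powershell
# Dismount only after the older, inactive logs have already been copied
Dismount-Database -Identity "MBX01\SG1\DB1"

# ...copy the remaining (recently modified) log files to the new location...

# Point the storage group at the new log folder without moving files again
Move-StorageGroupPath -Identity "MBX01\SG1" -LogFolderPath "E:\NewLogs" -ConfigurationOnly

Mount-Database -Identity "MBX01\SG1\DB1"
```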

jQuery getJSON from PowerShell via PHP on IIS: A Frustrating Gotcha

(Once you have your PHP on IIS environment set up and ready to go, you can check out this example code. You can also take a look at my new project, Gumper, on Bitbucket – an API for executing PowerShell and other scripts remotely.)

And in 40 years, this subject line will make absolutely no sense whatsoever…

About two years ago, I wrote a PowerShell script that generated a .Net form with all sorts of tools for our Help Desk. By searching for an Active Directory user, the Help Desk could instantly see if the user’s account or password was expired, whether or not the user had a mailbox, if the user’s mailbox database was mounted, quota information, and more. Recently, I revisited the script and decided it was time to take it to the web. Running these tasks from a PowerShell GUI is not terribly flexible or efficient. So, I set out to learn what it seems many Sys Admins like myself are learning: how to run PowerShell scripts via PHP.

Right out of the gate, I am already committing a bit of heresy. Rather than installing WAMP, I decided to stick with the IIS instance where I had already installed PHP (http://blogs.iis.net/bills/archive/2006/09/19/How-to-install-PHP-on-IIS7-_2800_RC1_2900_.aspx). And why use PHP to run Microsoft PowerShell (built on .NET)? Quite frankly, I like PHP and find it easy to learn. This project is still in its infancy, but there have already been a few important snags I thought worth sharing.

Use shell_exec()

If you want to launch a PowerShell script from a PHP file, you have a few options: exec(), shell_exec(), and system(). Using system() doesn’t make sense if you intend to get a response from the server; it is better suited to kicking off a scheduled task or another background process. Exec() will do the job, but it will split your response into an array based on line breaks. This might be OK depending on how you want data returned. For my purposes, I chose shell_exec() so I could format the data a bit. Shell_exec() will return the output of your script as a single string.

Keep your script silent

Note that “shell_exec()” returns all output from your script. That means errors, write-hosts, and everything else that pops up when you run a script. So, be sure to make your script as silent as possible and only “return” the little bit of data you want passed into PHP. This might mean a lot of Try/Catches (which is a good practice anyway).

Launch your script

This was actually a pretty easy one to conquer. Many people have examples of the basic syntax to use in order to launch a PowerShell script. Here is an example of what I am using:

shell_exec("powershell -NoProfile -File $scriptPath $argString < NUL")

(You may also need to pass the “-ExecutionPolicy” switch with a value depending on your setup.)

The most critical part of this is the “< NUL” bit at the end. Without it, PHP will never get the output from PowerShell and will, instead, wait and wait and wait. You will also notice that I use two variables: $scriptPath and $argString. These are PHP variables that I pass into a function and are used to call the script file along with any arguments. So, if $scriptPath is “C:\web\script1.ps1” and $argString is “-User jdoe”, the above line would render as:

shell_exec("powershell -NoProfile -File C:\web\script1.ps1 -User jdoe < NUL")

Authenticate

Remember that IIS has different options for authentication. I chose Basic with HTTPS, but someone else may have a better idea since I really just went with whatever worked first. The main thing is to turn off Anonymous authentication. The reason Basic works well for my situation is that each page runs as the authenticated user, with that user’s permissions. The importance of this will become evident below.

Enable fastcgi.impersonate

Even if you are authenticated, you probably won’t be able to launch any PowerShell scripts by default. This is because PHP does not pass your authentication along to the Windows command line unless fastcgi.impersonate is enabled. Enabling this in php.ini makes every PHP script run as the authenticated user. Keep that in mind, because it may change how you design your site.

To enable fastcgi.impersonate, locate your PHP.ini file (Maybe C:\Program Files (x86)\PHP\v5.x\php.ini) and un-comment the line that says “;fastcgi.impersonate = 1;” by removing the semicolon (;) at the beginning of the line. The line should look like this when you are done:

fastcgi.impersonate = 1;

After this, save php.ini and restart your web site.

Return JSON

If you are going to manipulate the data inside the browser, it makes sense to return your data from PowerShell as JSON. This is pretty straightforward. For example:


$a = "apple"

$b = "banana"

$c = "coconut"

$json = "{`"a`": `"$a`",`"b`": `"$b`",`"c`": `"$c`"}"

Return $json

This should spit out the following:


{"a": "apple", "b": "banana", "c": "coconut"}

Be sure to wrap strings in escaped quotation marks (`") to keep your JSON valid. And validate your syntax with JSONLint (http://jsonlint.com/).
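If you are on PowerShell 3.0 or later, ConvertTo-Json can spare you the hand-escaping entirely. A sketch (note that hashtable key order is not guaranteed, so the properties may come out in a different order):

```powershell
# Build the object first, then let PowerShell serialize it
$fruit = @{a = "apple"; b = "banana"; c = "coconut"}
Return ($fruit | ConvertTo-Json -Compress)
```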

Beware the Console Width

This is the real reason for writing; the thing that nearly made me go brain-dead. I chose to use jQuery’s getJSON function since I was already returning a JSON array. There are many great tutorials that show you how to accomplish this, so I won’t get into that. Despite all the great tutorials, I was getting nowhere. No matter what I did–callback to a function, change the mode to synchronous (WHAT!?), use $.ajax, click with my right pinky-finger–nothing worked. I could see the data in FireBug, but I could not get anything to show up on the page. This frustrated me all night. Today, I was watching the same data pass through FireBug, refreshing, clicking again, and feeling utterly hopeless (OK, so maybe there are more important things in life than PowerShell and PHP), when I finally realized something important: maybe FireBug wasn’t wrapping that really long Distinguished Name in the JSON array for easy reading. Maybe the data was coming back to the browser with a real line break, thereby invalidating the JSON.

Yup, that was it.

It turns out, when shell_exec() launches PowerShell, the data returned is formatted to the default console size: 80 characters wide. Meaning, if your JSON object goes beyond 80 characters, it will break to the next line of the console and the data you get back will be invalid. For example:


{"distinguishedName": "OU=weeny,OU=teeny,OU=bitsy,OU=itsy,DC=reallylongnamiccusd

omainiccus,DC=net"}

Instead of…


{"distinguishedName": "OU=weeny,OU=teeny,OU=bitsy,OU=itsy,DC=reallylongnamiccusdomainiccus,DC=net"}

So, what are the options? Well, there are two I can think of. First, build well-formed line breaks into your JSON like so:


$a = "apple"

$b = "banana"

$c = "coconut"

$json = "{`n`"a`": `"$a`",`n`"b`": `"$b`",`n`"c`": `"$c`"`n}"

Return $json

Adding the new line character (`n) will cause the JSON to break in a spot that will not invalidate the array. This should return:


{
"a": "apple",
"b": "banana",
"c": "coconut"
}

Option 2 is to adjust the console size if your script will output more than 80 characters on a line. The “Hey, Scripting Guy!” blog has a great article on how to do this: http://blogs.technet.com/b/heyscriptingguy/archive/2006/12/04/how-can-i-expand-the-width-of-the-windows-powershell-console.aspx
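If you go the console-size route, one approach is to widen the output buffer at the top of the script itself. A sketch (host support for resizing varies, so test this in your environment):

```powershell
# Widen the output buffer so long JSON strings are not wrapped at 80 characters
$buffer = $Host.UI.RawUI.BufferSize
$buffer.Width = 512
$Host.UI.RawUI.BufferSize = $buffer
```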

As I said earlier, this project is in its infancy and I am sure many more gotchas await me. However, this last one seemed like something others might run into. It’s so silly, but so frustrating. And, I imagine anything run from the command prompt will yield the same results.

Happy coding.

Fixing Blackboard’s Retention Center

(Keep in mind that Blackboard expects a fix for this in Learn 9.1 SP15)

Well, it’s upgrade week for us and our Blackboard Learn system. We went from 9.1 SP7 to SP11 yesterday with relatively few hiccups. One of the tools our Instructional Design team plans to take advantage of is the new Retention Center. This building block replaces the Early Warning System and gives instructors a graphical overview of student engagement. We tried working with it on our development server, but ran into a known issue whereby Retention Center becomes corrupt if you update it from Software Updates (again, fix not planned until SP15).

When we applied SP11 on our production server, Retention Center fared no better. Retention Center version 1.0 was installed by default. Despite showing as available in the Sys Admin panel, it was nowhere to be found inside course tools or the global menu. Installing the latest update only led to the “Corrupt” indicator appearing next to Retention Center and the Early Warning System still showed up inside courses. Pulling up the (MS)SQL, I noted the following tables were in the wrong schema:

ews_course_users
ews_gradebook_main
ews_note
ews_notif_attachment
ews_notif_recipient
ews_notification

If you are experiencing this same problem, you might see these tables prefixed with either “BBLEARN” or “BB_BB60.” The fix that worked for me was pretty simple:

  1. Update Retention Center
  2. Transfer the tables to the correct schema (in my case “dbo”)
  3. Uninstall Retention Center
  4. Reinstall Retention Center

The MSSQL syntax to transfer the schema was simply ‘ALTER SCHEMA dbo TRANSFER bblearn.table_name’ for each table. Since I am both a PowerShell and Invoke-Sqlcmd fan, I used PowerShell to make things a little quicker:


$bbTables = @('ews_course_users',
'ews_gradebook_main',
'ews_note',
'ews_notif_attachment',
'ews_notif_recipient',
'ews_notification')
ForEach ($t in $bbTables){Invoke-Sqlcmd -server server01\bbinstance -database BBLEARN -query "ALTER SCHEMA dbo TRANSFER bblearn.$t"}

This did the trick for us, though I am not a Bb engineer or a DBA, so proceed with caution if you intend to do likewise.

List a Printer in Active Directory Using a CNAME

(See also David’s comment below, which uses the native ActiveDirectory module)

This was a fun one for me…

One of the great questions we are dealing with now is how to make printers easy to locate on the network. Traditionally, users mapped a printer by UNC path only. This is not always helpful when identifying location and features. Although our printers are published in Active Directory, browsing the directory for a printer is not standard practice. I have to take some of the blame for this since I worked at our Help Desk for several years and reinforced this method. Now, I am trying to convince others that browsing the directory is much simpler and allows for greater freedom in printer naming (i.e.: including location information in the printer name is no longer critical).

However, after tackling how to list more than 20 directory printers in the first “Add Printer Wizard” screen of Windows 7 (http://community.spiceworks.com/how_to/show/1374), I unearthed a problem: When browsing the directory, Windows will map the printer via its print server’s Windows name, rather than a CNAME (or, alias). That is kind of a no-brainer when you consider how that information gets into AD from the server, but it is an important point to note. Why is this a problem? We use an alias for printer mappings so we can swap print servers without axing all printing. If everyone maps directly to the server’s Windows name, they will lose their mapping when that server is replaced. Other IT shops use DNS round-robin to load-balance print servers. Adding a printer from the browser negates that benefit.

There are two options at this point. First, you can choose to add the printers to AD manually. This allows you to modify the UNC path and server name without a fuss. However, it does not update features, model, location, etc. automatically. When you choose “List in the directory” from the print server, all of this is handled automatically. The second option, which I suggest, is to let the print server list your printers and update them for you. This, of course, requires a workaround for the alias.

The Quest for Option 2

Naturally, I hit the Googles. When a few searches turned up nothing, I had a realization: All of this information is in AD and has to be accessible. I turned to Quest’s AD cmdlets and began exploring the properties of the AD object known as a “printqueue”. I was surprised to find these objects nested within the print server’s container. Three properties carried importance to me: serverName, shortServerName, and uNCName (interesting capitalization, by the way). In the end, the only property in this list that determines the network path to the server is uNCName. But, the other properties are good to update for display purposes.

Using Quest Active Roles, I was able to update the relevant properties:


Set-QADObject "CN=SOMESERVER-SomePrinter,CN=SOMESERVER,OU=Servers,DC=domain,DC=com" -ObjectAttributes @{servername="prints.domain.com";shortservername="prints";uNCName="\\prints.domain.com\SomePrinter"}

You can also use the printer’s name instead of its distinguished name. For example, instead of “CN=SOMESERVER-SomePrinter,CN=SOMESERVER,OU=Servers,DC=domain,DC=com” you could simply use “SOMESERVER-SomePrinter”. In most organizations, this should be distinguished enough.

Directory-Wide Update Script


#Add the Quest Active Roles AD Management snapin but silently continue if it fails
Add-PSSnapin Quest.ActiveRoles.ADManagement -EA 0

#Variables for the print servers' real names, based on DN
$PrintServersDN = @("CN=SOMESERVER,OU=Servers,DC=domain,DC=com")
$PrintServersDNS = @()

#Variables for the print server's CNAME/alias
$PrintServerAlias = "prints"
$PrintServerAliasDnsSuffix = "domain.com"
$PrintServerAliasLong = "$PrintServerAlias.$PrintServerAliasDnsSuffix"

#Get all the print servers' FQDNs and add them to the $PrintServersDNS array
ForEach ($s in $PrintServersDN){
    $serverDNS = (Get-QADComputer $s | select DnsName).DnsName.ToLower()
    $PrintServersDNS += $serverDNS
}

#Find every published printqueue belonging to one of those servers
$printers = Get-QADObject -Type printqueue -IncludeAllProperties | where {$PrintServersDNS -contains $_.servername}

If ($printers -ne $null){
    ForEach ($p in $printers){
        $printShareName = $p.printsharename
        $printerDN = $p.DN
        $uncName = "\\$PrintServerAliasLong\$printShareName"
        Set-QADObject $printerDN -ObjectAttributes @{servername="$PrintServerAliasLong";shortservername="$PrintServerAlias";uNCName="$uncName"}
    }
} Else {
    Write-Host "No changes to be made"
}

I was excited by the results. The Add Printer Wizard picked up the new alias and mapped the printer according to the server’s alias. The downside? Every time I made a configuration change on the print queue, the server (as mentioned earlier) automatically updated AD with the print server’s Windows name. That’s when the wonderful Task Scheduler came to my rescue. I could simply set up a task triggered by 306 events in the Microsoft>Windows>PrintService>Operational log. But, this took a lot more study and brainstorming than I expected. And I learned a lot more about the Task Scheduler.

Here was the big question: “How can I run a script every time a printer’s configuration changes that will not have to update the whole directory every time?” As I considered triggering only one event with a long enough pause to cover all changes in the event of a mass-update, I stumbled across this post: http://blogs.technet.com/b/otto/archive/2007/11/09/find-the-event-that-triggered-your-task.aspx. It shows how to create and pull variables from a scheduled task using Value Queries. This may not be news to you, but it was to me. The good news: I was able to write a script (shared below) that triggers at each printer configuration change and updates only that printer. Then, I hit another wall: a race condition. When the script fired right away, it would not detect any changes in AD because of either replication or a delayed write. Delaying the task by 30 seconds did the trick. It’s not perfect, but it does work. You can tweak your own settings. I just have to avoid changing two printers less than 45 seconds apart.

Single Update Script


Param(
    [string]$PrinterName
)

#Add the Quest Active Roles AD Management snapin but silently continue if it fails
Add-PSSnapin Quest.ActiveRoles.ADManagement -EA 0

#Get the printer's share name (in case it is different than the printer's name)
$PrinterShareName = (Get-ItemProperty hklm:\system\currentcontrolset\control\print\printers\$PrinterName)."Share Name"

#Variables for the print server's real names
$PrintServer = (Get-Item env:computername).Value
$PrintServerDNS = (Get-QADComputer $PrintServer).DnsName

#Variables for the print server's CNAME/alias
$PrintServerAlias = "prints"
$PrintServerAliasDnsSuffix = "domain.com"
$PrintServerAliasLong = "$PrintServerAlias.$PrintServerAliasDnsSuffix"

#Find the printer object in AD
$PrinterADname = "$PrintServer-$PrinterShareName"
$PrinterADobject = Get-QADObject -Type printqueue $PrinterADname -IncludeAllProperties | where {$_.servername -eq $PrintServerDNS}

If ($PrinterADobject -ne $null){
    $printerDN = $PrinterADobject.DN
    $uncName = "\\$PrintServerAliasLong\$PrinterShareName"
    Set-QADObject $printerDN -ObjectAttributes @{servername="$PrintServerAliasLong";shortservername="$PrintServerAlias";uNCName="$uncName"}
    Write-Host "Done"
} Else {
    Write-Host "No changes to be made"
}

Please note, this script is intended for Server 2008. Server 2003 stores its printers elsewhere in the registry. While modifying the script to detect and respond to Server 2003 and Server 2003 x64, I realized it probably would not be helpful anyway as a triggered event.

To complete the process, I created a task (following the steps in the linked Technet blog above) which set “param1” to the task’s variable “$(param1)”. This was passed in as the “$PrinterName” parameter variable in the PowerShell script. Here is the Value Query I created in the task:


<ValueQueries>
<Value name="param1">Event/UserData/PrinterSet/Param1</Value>
</ValueQueries>

Further configuration included setting the task to delay itself by 30 seconds and to not launch a new instance if one is already running. Otherwise, you may get 20 parallel processes. That’s basically it:

  1. Create your script
  2. Create a scheduled task based on an event 306 in the PrintService>Operational log, delay it by 30 seconds, and make sure it passes the printer name parameter to your script

The directory-wide script is good for cleanup, but the single update script is better for regular updates. This should keep your directory up-to-date with your print servers’ CNAME.

Targeting Windows 7 with Group Policy: Why White Space Matters

This morning, I endeavored to do a simple thing with Group Policy: write a WMI filter that targeted Windows 7 clients. So, I double-checked the syntax required, and ran the following PowerShell command to get the right text:

get-wmiobject win32_operatingsystem | select caption

Simple enough. Here was my WMI query:

SELECT * FROM Win32_OperatingSystem WHERE Caption = "Microsoft Windows 7 Professional"

Result? “Filtering: Denied (WMI Filter)”

I can’t count the times I looked over the syntax, compared it with others online, checked the namespace, and ran gpresult. Finally, I came to a post on Technet: http://social.technet.microsoft.com/Forums/en/winserverGP/thread/0bca8962-cd35-48da-ace1-856b334a9d5c

The suggested answers just sent me through the same stuff I had already checked. Expecting the usual slew of “I’m having this problem too!” responses, I begrudgingly scrolled to the bottom. Surprisingly, I did not find the typical replies, but rather a response from the user “aac396”:

‘I just ran into the same problem and found the answer.  The WMI result of caption has a space at the end of Microsoft Windows 7 Enterprise.  My query was “Microsoft Windows 7 Enterprise” and it never worked.  I added a space at the end of “Microsoft Windows 7 Enterprise ” and it’s fine now.’

Note the trailing space after “Enterprise.” No one had marked this response as helpful, so I doubted it would actually work for me. Maybe it was one of those one-in-a-million solutions that everyone else raises their eyebrows at. But, quite to my satisfaction, the following query worked:

SELECT * FROM Win32_OperatingSystem WHERE Caption = "Microsoft Windows 7 Professional "

I had to laugh. Suddenly, the other suggestion of searching “Microsoft Windows 7%” made perfect sense. And it is something that really would not stand out in a PowerShell window. So, when repeatedly traumatizing your head against a blunt object with WMI filtering, check for any white spaces Microsoft may have included as a bonus.
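One quick way to expose that bonus whitespace is to bracket the caption so the boundary is visible:

```powershell
# Any trailing space shows up just before the closing bracket
"[{0}]" -f (Get-WmiObject Win32_OperatingSystem).Caption
```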

A Quicker PowerShell Form

What is the most controversial thing you can do with PowerShell? Build a GUI. Who wants a GUI when keeping it in the shell lets everyone walking past your cube make some comment about the Matrix? Here is the key: it’s not about you. Typically, when the need arises to create a GUI, it is so you can give someone else a tool that is easy to use but powerful (like PowerShell). My personal motivation was to provide a tool for our Help Desk.

In my previous post, I hinted at a follow-up. Here it is. Why was I trying to create a function that could name a variable and turn it into an object? Because creating GUIs from PowerShell is a mess. And that kind of makes sense all things considered. If you want to get a primer on creating .NET forms from PowerShell, check out http://blogs.technet.com/b/csps/archive/2011/12/07/guiapp.aspx. Following that link will put you face-to-face with code like this:


$objOutputBox = New-Object System.Windows.Forms.TextBox
$objOutputBox.Location = New-Object System.Drawing.Size(680,40)
$objOutputBox.Size = New-Object System.Drawing.Size(460,500)
$objOutputBox.Multiline = $True
$objOutputBox.Font = New-Object System.Drawing.Font("Courier New", "8.5")
$objOutputBox.Wordwrap = $True
$objOutputBox.ReadOnly = $True
$objOutputBox.ScrollBars = [System.Windows.Forms.ScrollBars]::Vertical
$objForm.Controls.Add($objOutputBox)

As with my “car” example in the previous post, this gets messy and repetitive. I found that when creating form objects, there were certain attributes I used over and over: type (button, label, etc.), size, location, parent (the object the new object would attach to), and text. Also explained in the previous post, creating a function to handle this was a bit arduous. But, here it is. A function to allow faster (and more readable in my opinion) .NET form creation in PowerShell:


Function New-FormObject ($varName,$parent,$type,$size,$location,$text) {
    $object = New-Object System.Windows.Forms.$type
    $object.Location = New-Object System.Drawing.Size($location)
    $object.Size = New-Object System.Drawing.Size($size)
    $object.Text = $text
    New-Variable $varName -Value $object -Scope Global
    If ($parent -ne $null) {
        (Get-Variable $parent).Value.Controls.Add((Get-Variable $varName).Value)
    }
}

New-FormObject "myButton" "myTab1" "button" "75,30" "10,10" "Click Me!"

The above code creates the “$myButton” button, sets its size, location, text, and then adds it as a control to the “$myTab1” tab. There is also some logic that handles an object that will not be attached to a tab or some other object. This would be for creating another form (a pop-up perhaps?). For example:

New-FormObject "alertForm" $null "form" "400,200" "0,0" "Alert"

It is important to note the scope given to the “$myButton” variable. In this case it is global, but that is because I pulled it in from another script. Typically, you would only need to set the scope to script. The reason is that you want to manipulate and use the object outside of the function. If we create a text box using this function, we want to be able to pull text from it and put text into it later on.

The next important thing to note is “(get-variable $parent).Value.Controls.Add((get-variable $varName).value)”. As mentioned in the previous post (how many times will I use that line?), since we are passing variable names through the function, we have to modify them using Set- and New-Variable. To get data from the variables, we need to use Get-Variable. So, typing “(Get-Variable $parent)” is similar to typing “$myTab1”. Similar because, as you will note, there is this “value” business. When you “Get-” a variable, it returns the variable with two members: Name and Value. To really return the same thing as “$myTab1” you need to type “(Get-Variable $parent).value”. Otherwise, it is an apples and oranges type of situation.

Finally, a comment on the attributes. I chose to make the function accept a variable name, parent name, object type, size, location, and text. That is only because I use those attributes most commonly. You could easily add your own by including additional parameters. Too many parameters might start to get clunky, which is why I need to create a smarter function that can adapt based on the attributes sent to it. But, this has been a great start for me. The benefits have already paid off while reworking a GUI script nearly 1000 lines long.

PowerShell: Use a Function to Create a Variable

Anyone who has worked with PowerShell for more than five seconds knows (or anyone who hasn’t but has done any programming can probably guess) that creating a variable in PowerShell is simple:

$animal = "cat"

Amazing! You now have a variable that is equal to the string “cat”. So, if it is that simple, why on earth would you need a fancy and dandy function to do that for you? I can probably give a better reason in the post to follow, but to make it simple, what if you want a script to generate a reusable variable on the fly? The situation where this was applicable to me was while writing a script that needed to create several objects with very similar properties. If I wanted to create three objects, here might be the code:


$car1 = New-Object System.Object
$car1 | Add-Member -type NoteProperty -name Year -value 2009
$car1 | Add-Member -type NoteProperty -name Make -value "Chevrolet"
$car1 | Add-Member -type NoteProperty -name Model -value "Camaro"

$car2 = New-Object System.Object
$car2 | Add-Member -type NoteProperty -name Year -value 2011
$car2 | Add-Member -type NoteProperty -name Make -value "Ford"
$car2 | Add-Member -type NoteProperty -name Model -value "Mustang"

$car3 = New-Object System.Object
$car3 | Add-Member -type NoteProperty -name Year -value 1979
$car3 | Add-Member -type NoteProperty -name Make -value "Chevrolet"
$car3 | Add-Member -type NoteProperty -name Model -value "Impala"

Isn’t that rather repetitive and needlessly long? What if we could instead create a function that would reduce the above code to this:


New-Car "car1" 2009 "Chevrolet" "Camaro"

New-Car "car2" 2011 "Ford" "Mustang"

New-Car "car3" 1979 "Chevrolet" "Impala"

That would be so much quicker, easier to read, and just plain efficient. The idea is simple. You create a function that accepts input for the new object/variable’s name, the year, make, and model. The tricky part comes in when you try to get a function to create a variable based on the string value of another variable. How do you use ($varName = “car1”) to create the variable $car1?

You might think you could do something like this:


$varName = "car1"

$varName = "$" + "$varName"

At this point, $varName does equal “$car1”, but only as a string value. If you try setting the value of $car1 inside a function with something like “$varName = New-Object System.Object”, you are overwriting $varName with an object, not creating $car1. “Aha!” you might say, “I can put parentheses around $varName!” Seems logical. In theory, “($varName) = New-Object System.Object” would render as “$car1 = New-Object System.Object”. Except, as you may remember, $varName only contains the value “$car1”, which is a string, not a variable. Now what?
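You can see the dead end for yourself in a couple of lines; $varName is never anything more than text:

```powershell
$varName = "car1"
$varName = "$" + "$varName"     # inside double quotes, $varName expands to "car1"

$varName                        # prints the string: $car1
$varName.GetType().Name         # String -- just text, not a reference to a variable
```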

Enter Set-Variable…

When you create a variable using Set-Variable or New-Variable, you do not name the variable with the traditional “$” in front. For example:


$color = "green"

is the same as

New-Variable -name color -value "green"

Both lines create the variable “$color” with the value “green”. How is this helpful? Consider the following:


Function New-Var ($name,$value){

New-Variable -name $name -value $value -scope script

}

New-Var color "green"

The “New-Var” function has just created the variable “$color” with the value “green”. And, because the variable scope is set to script, you can call and manipulate the new variable outside the function:


New-Var color "green"

If ($color -notlike "red"){$color = "red"}

Write-Host $color

Thrilling, right? Maybe not yet. So far, all we have done is create a function that takes far more effort than a plain assignment to perform a very basic task. However, this truly becomes a handy trick when working with similar objects that contain multiple members, or properties. What will our New-Car function look like?


Function New-Car ($name,$year,$make,$model) {

New-Variable -name $name -value #Oh no! What do we put here without resorting to an array?

}

The New-Variable and Set-Variable cmdlets don’t readily lend themselves to adding on bits and pieces to an object. The easy way around this? Create a temporary object inside your function that will become the value of the resulting variable:


Function New-Car ($name,$year,$make,$model) {

$object = New-Object System.Object

$object | Add-Member -name year -type NoteProperty -value $year

$object | Add-Member -name make -type NoteProperty -value $make

$object | Add-Member -name model -type NoteProperty -value $model

New-Variable -name $name -value $object -scope script

}

Et voilà! We create the object, give it the values we want to use for “$car1”, and then use the variable $object as the value for New-Variable. Note the “-scope” in the New-Variable cmdlet. This is important if you want to manipulate or use the $car1 variable outside the function. And, after all, that is the only reason we would want to go through this mess.

Now that we have written this nice function that creates a variable based on the name and values we pass through as parameters, we can simplify our code to this:


New-Car car1 2009 "Chevrolet" "Camaro"

New-Car car2 2011 "Ford" "Mustang"

New-Car car3 1979 "Chevrolet" "Impala"

New-Car car4 1999 "Ford" "Escort"

New-Car car5 2007 "Kia" "Rio"

There you have it: 20 lines of code reduced to 5, and isn’t that easier to read? All thanks to using a function to create a variable.
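Putting the whole thing together, the variables the function creates behave like any others once the script-scoped New-Variable call has run, so you can read their properties directly or round them all up by name pattern:

```powershell
Function New-Car ($name,$year,$make,$model) {
    $object = New-Object System.Object
    $object | Add-Member -type NoteProperty -name Year -value $year
    $object | Add-Member -type NoteProperty -name Make -value $make
    $object | Add-Member -type NoteProperty -name Model -value $model
    New-Variable -name $name -value $object -scope script
}

New-Car car1 2009 "Chevrolet" "Camaro"
New-Car car2 2011 "Ford" "Mustang"

$car1.Make                      # Chevrolet -- just a normal variable now

# Or enumerate every generated car at once:
Get-Variable car* | ForEach-Object { "$($_.Value.Year) $($_.Value.Make) $($_.Value.Model)" }
```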