
Reversing the AD Plugin UID algorithm

Background:
I’ll start by saying we have a rather large AD with over a million users. #humblebrag

I had a ticket escalated to me that was quite odd. A user had logged into their Mac, but a number of applications and command line tools were reporting a different user account.

After a bit of troubleshooting, I found that both of these users had the same UID on the Mac. So I started to dig into how this could happen and how the AD Plugin generates its UIDs. The post below is the result of that work.

I think it is pretty well known that the AD Plugin uses the first 32 bits of the 128-bit objectGUID in AD to determine the UID on the Mac.

But after a bit more digging, it turns out it's not quite that simple.

I'll work through a few examples here, show you how the AD Plugin determines the UID that will be used, and provide a script that will let you determine the UID of your users' accounts in AD. From there you can check whether you have any UID collisions.

First, let's start with a user in AD.

If we inspect the user's account record in AD with something like Apache Directory Studio, we can see the objectGUID, which is displayed as a 128-bit hex string.

For example:
screen-shot-2016-11-29-at-4-50-30-pm

Here we can see an objectGUID with a value of:

6C703CF1-B5D1-41F8-880B-317728CBD4F5

Now the AD Plugin will read this value and take the first 32 bits, which are: 6C703CF1

It then converts this hex value to decimal. This can be achieved by using the following command:

echo "ibase=16; 6C703CF1" | bc

Which will return: 1819294961

Now you might say, well that's easy!

And I thought so too. But there's a slight issue with this. The UID on the Mac has a maximum value because it is a signed 32-bit integer, so the largest number the UID can be is: 2147483647

In this example the hex value 6C703CF1 converts nicely into a signed 32-bit integer (as in, its value is less than or equal to 2147483647) and so can be used as the Mac UID without any further work.
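
If you want to check a value yourself, bash can do the same conversion and comparison (a trivial illustration, not Apple's code):

# convert the first 32 bits to decimal and compare against the signed 32-bit maximum
first32="6C703CF1"
decimal=$((16#$first32))
if [ "$decimal" -gt 2147483647 ]; then
    echo "$first32 ($decimal) overflows a Mac UID and needs further work"
else
    echo "$first32 ($decimal) fits and can be used as the Mac UID as-is"
fi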

But let's look at another example:

screen-shot-2016-11-29-at-4-56-01-pm

Here we can see an objectGUID with a value of:

BEB08781-0DAF-4B12-9EB6-AF33CBA90876

Now if we do our conversion on this as we did before:

echo "ibase=16; BEB08781" | bc

We end up with a result of: 3199240065

Unfortunately this number is larger than the maximum value a Mac UID (a signed 32-bit integer) can hold.

So what do we do?

It turns out Apple uses a table to convert the first character of these 32-bit values into another digit and then recalculates the UID from the modified value.

For example, with the value above, BEB08781, we take the first character B and replace it with the number 3 to end up with: 3EB08781

Now when we do the conversion:

echo "ibase=16; 3EB08781" | bc

We get a value of: 1051756417

Which fits perfectly into our 32 Bit integer Mac UID.

The table of conversion looks like this:

Building a script
So now we know this, how do we build a script to do this conversion for us?

Interacting with records in AD is usually done with `ldapsearch`, and that's how I work with all my AD queries from my machine. It allows me to target specific OUs, is generally easier for me to work with than dscl, and I don't need to have my machine bound to AD for it to work.

So first, let's start with a basic ldapsearch to get the user's objectGUID.
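
The exact query isn't reproduced here, but a minimal sketch looks something like this (the server, bind account and search base are placeholders for your own environment):

ldapsearch -LLL -x -H ldap://dc01.my.domain -D "reader@my.domain" -W \
    -b "CN=Smith\, John,OU=Accounts,OU=My Users,OU=My House,OU=My Room,DC=My,DC=Domain" \
    -s base objectGUID sAMAccountName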

This should return an output like this:

dn: CN=Smith\, John,OU=Accounts,OU=My Users,OU=My House,OU=My Room,DC=My,DC=Domain
objectGUID:: TE0kyRyv8UCppPeXes5JTg==
sAMAccountName: john.smith

Now, the objectGUID here does not match what we see in AD with Apache Directory Studio, and that is because it is encoded in Base64, as denoted by the double colon "::" after the objectGUID label.

So to convert this into something we can work with we need to decode it from base64 and then hex dump it.

So to achieve that we use the following function:
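
The original gist isn't embedded here, but a minimal sketch of such a function looks like this. It decodes the Base64 value, hex dumps it, and reorders the first three groups of bytes from little-endian order into the GUID string AD displays (base64 -D is the macOS decode flag; use -d on Linux):

guid_from_base64() {
    local hex
    hex=$(echo "$1" | base64 -D | xxd -p | tr '[:lower:]' '[:upper:]')
    printf '%s%s%s%s-%s%s-%s%s-%s-%s\n' \
        "${hex:6:2}" "${hex:4:2}" "${hex:2:2}" "${hex:0:2}" \
        "${hex:10:2}" "${hex:8:2}" \
        "${hex:14:2}" "${hex:12:2}" \
        "${hex:16:4}" \
        "${hex:20:12}"
}

# guid_from_base64 "TE0kyRyv8UCppPeXes5JTg=="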

This then converts our objectGUID from ldapsearch into:

C9244D4C-AF1C-40F1-A9A4-F7977ACE494E

By now we should have all the bits we need to run a script that will:
1. Pull the objectGUID from AD using ldapsearch
2. Decode that objectGUID from Base64 back into the hex GUID string
3. Convert the first 32 bits from hex to decimal
4. Decide whether that decimal value is larger than the maximum for a signed 32-bit integer
5. If it is larger, replace the first character of that objectGUID using the table above
6. Recalculate the new value in decimal to determine the UID the AD Plugin will set for that user on the Mac.

Completed script
With the script below you can target a user account DN in the search base, and it will return that user's DN and objectGUID in plain text, along with the UID that will be used on a Mac when that user logs in.
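
The original script isn't embedded here, but a rough sketch of the same steps looks like this. The server, bind account and DN are placeholders, and the 8-F to 0-7 substitution is an assumption based on the B to 3 example above; use Apple's actual conversion table if yours differs:

#!/bin/bash
SEARCH_BASE="CN=Smith\, John,OU=Accounts,DC=My,DC=Domain"    # target user DN

# 1. pull the objectGUID from AD
B64_GUID=$(ldapsearch -LLL -x -H ldap://dc01.my.domain -D "reader@my.domain" -W \
    -b "$SEARCH_BASE" -s base objectGUID | awk '/^objectGUID::/ {print $2}')

# 2. decode from Base64 and reorder the bytes into the GUID string AD displays
HEX=$(echo "$B64_GUID" | base64 -D | xxd -p | tr '[:lower:]' '[:upper:]')
GUID=$(printf '%s%s%s%s-%s%s-%s%s-%s-%s' \
    "${HEX:6:2}" "${HEX:4:2}" "${HEX:2:2}" "${HEX:0:2}" \
    "${HEX:10:2}" "${HEX:8:2}" "${HEX:14:2}" "${HEX:12:2}" \
    "${HEX:16:4}" "${HEX:20:12}")

# 3./4. convert the first 32 bits to decimal and test the signed 32-bit limit
FIRST32="${GUID:0:8}"
MAC_UID=$((16#$FIRST32))

# 5./6. if it overflows, substitute the first character and recalculate
if [ "$MAC_UID" -gt 2147483647 ]; then
    case "${FIRST32:0:1}" in                      # assumed substitution table
        8) c=0 ;; 9) c=1 ;; A) c=2 ;; B) c=3 ;;
        C) c=4 ;; D) c=5 ;; E) c=6 ;; F) c=7 ;;
    esac
    MAC_UID=$((16#${c}${FIRST32:1}))
fi

echo "DN:         $SEARCH_BASE"
echo "objectGUID: $GUID"
echo "Mac UID:    $MAC_UID"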

Bonus points
For bonus points, you might want to target a container of users, say OU=Users, and then iterate through that container outputting the UIDs for those users so you can check for duplicates.

So here is an ugly bash script that does just that.
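
The original script isn't embedded here either, but the gist of it is the single-user logic above wrapped in a loop over every user in the OU, with a duplicate check at the end (same placeholder server/OU and assumed substitution table as above; very large OUs may need paged results):

#!/bin/bash
ldapsearch -LLL -x -H ldap://dc01.my.domain -D "reader@my.domain" -W \
    -b "OU=Users,DC=My,DC=Domain" "(objectClass=user)" sAMAccountName objectGUID |
awk '/^sAMAccountName:/ {n=$2} /^objectGUID::/ {g=$2} /^$/ {if (n && g) print n, g; n=""; g=""}' |
while read -r NAME B64; do
    HEX=$(echo "$B64" | base64 -D | xxd -p | tr '[:lower:]' '[:upper:]')
    FIRST32="${HEX:6:2}${HEX:4:2}${HEX:2:2}${HEX:0:2}"
    MAC_UID=$((16#$FIRST32))
    if [ "$MAC_UID" -gt 2147483647 ]; then
        case "${FIRST32:0:1}" in                  # assumed substitution table
            8) c=0 ;; 9) c=1 ;; A) c=2 ;; B) c=3 ;;
            C) c=4 ;; D) c=5 ;; E) c=6 ;; F) c=7 ;;
        esac
        MAC_UID=$((16#${c}${FIRST32:1}))
    fi
    echo "$MAC_UID $NAME"
# print any UID that appears more than once (from its second occurrence onwards)
done | sort -n | awk 'seen[$1]++ {print "DUPLICATE UID: " $0}'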


Automating macOS Server alerts

Update: 14-11-2016

So I thought I'd better add support for adding multiple email addresses.

Each email address that gets imported needs an updated index number to go into the Z_PK column. So the script will now import multiple email addresses and assign a Z_PK index number for each email address.

The script will now read in a CSV file instead of taking stdin for input. You will need to specify the location of the CSV file in the script, or modify it as needed.
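
The script itself isn't embedded here, but the idea, roughly, in shell form (the python version does the same thing; the CSV path is a placeholder and the non-address column values mirror the single-row example further down this post):

DB="/Library/Server/Alerts/alertData.db"
CSV="/path/to/recipients.csv"        # one email address per line

PK=$(sqlite3 "$DB" "SELECT IFNULL(MAX(Z_PK),0) FROM ZADMAILRECIPIENT;")
while IFS=, read -r EMAIL _; do
    [ -z "$EMAIL" ] && continue
    PK=$((PK + 1))                   # each recipient row needs its own Z_PK index
    sqlite3 "$DB" "INSERT OR REPLACE INTO ZADMAILRECIPIENT VALUES('$PK','4','1','1','$EMAIL','en');"
done < "$CSV"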

Update: 17-10-2016

So after a bit of further testing, it appears that the alertData.db does not get created automatically when Server.app is installed; it requires the Alerts tab to be selected in Server.app before the db is created. This presented a problem for me, as I am automating the deployment of these macOS Server machines and I want to include the alert email settings with zero touch. After some more digging around in the alertData.db, I was able to find the tables and values needed to create a bare database that enables alerts for the Caching service. The updated script will now create the alertData.db if it does not exist, enable email alerts for the Caching service (no other services are enabled) and then set the notification recipient's email address.

If you wish to enable notifications for extra services such as Mail, you should add a ‘1’ to the ZMAILENABLED and ZPUSHENABLED columns for the relevant service at the point in the script where it inserts these values.

For example:
The table (ZADALERTBUNDLE) contains the following column names and types:

(Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZENABLED INTEGER, ZMAILENABLED INTEGER, ZPUSHENABLED INTEGER, ZBUNDLE VARCHAR, ZNAME VARCHAR)

The columns we are interested in are ZMAILENABLED and ZPUSHENABLED. These columns will accept a data type of: INTEGER. In actual fact, this is really a boolean with either a 0 (False) or a 1 (True) value assigned.

Here is an example of what having the Caching service enabled for mail and push notifications looks like in our alertData.db.
You can also see that the Mail service has mail and push notifications disabled:

Z_PK        Z_ENT       Z_OPT       ZENABLED    ZMAILENABLED  ZPUSHENABLED  ZBUNDLE            ZNAME      
----------  ----------  ----------  ----------  ------------  ------------  -----------------  -----------
2           2           2           1           1             1             Caching            Caching    
5           2           2           1           0             0             Mail               Mail       

Note the two ‘0’s in the ZMAILENABLED and ZPUSHENABLED columns mentioned earlier; setting these to 1 means the service will have push and email alerts enabled for it.

Now that we know what controls the enabled/disabled checkboxes in the Alerts section of Server.app, it is trivial to modify this directly in the alertData.db.
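
For example, to flip those two ‘0’s for the Mail service to ‘1’s directly (take a backup of the db first):

sudo sqlite3 /Library/Server/Alerts/alertData.db \
    "UPDATE ZADALERTBUNDLE SET ZMAILENABLED = 1, ZPUSHENABLED = 1 WHERE ZNAME = 'Mail';"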

Original post below:

In macOS Server it used to be possible to edit the alert settings with something like:

serveradmin settings info:notifications:certificateExpiration:who = john.smith@contoso.com

Unfortunately it appears that it is no longer possible to add an email address to the alert settings via the command line.

This presented an issue for me as I automate the deployment of many macOS servers where we want to have an email address entered in the alert settings so that user receives notifications from their server.

After a little bit of digging, the location where this information is now saved was found:

/Library/Server/Alerts/alertData.db

Specifically, it is stored in a table called ZADMAILRECIPIENT, in a column called ZADDRESS (VARCHAR).

Now that we know that, all we have to do is work out a way of adding our desired values to that table.

The quick and dirty of it is something like this:

sqlite3 /Library/Server/Alerts/alertData.db "INSERT or REPLACE INTO ZADMAILRECIPIENT VALUES('2','4','1','1','john.smith@apple.com','en')"

But in my quest to force myself into using python, I wrote a quick python script that takes the email address as the first argument and then writes this into our db, which makes it easy to put into my deployment workflow.

To use it, simply run it like this:

./script.py

You will need to edit the location of your csv file. Currently this is set in line number 96 of the script.

The script is as below.


Your one stop formatting shop

Update 22 Dec 2016

So I finally got around to creating a new NBI based on 10.12.
When I tried to run this script on 10.12, I found that Apple had changed the output in the diskutil command.

So previously I was searching for the text of either Yes or No for the removable media label.
Under 10.12, this is no longer the case, with Apple replacing the word “No” with “Fixed”.

To combat this, the only thing to do is, of course, use python, with which I have a love-hate relationship. So in the interests of just getting it done, I have updated the script, replacing the following:

 $(echo "$DISK_INFO" | awk '/Solid State:/ {print $3}') = "No")

with

$(diskutil info -plist $DISK | plutil -convert json -o - - | python -c 'import sys, json; print json.load(sys.stdin)["SolidState"]')

This gets the output from diskutil as a plist, converts it into JSON and then uses python to print out the value for the key ‘SolidState’, which is returned as a boolean (true/false).

This is much better than parsing text which may change in the future.

Update – 9 Aug 2016

Well, it turns out I made some assumptions in my first draft of the formatting script around how to identify a machine with a Fusion Drive. It also turns out I left out what to do if a FileVault-enabled disk is discovered. I have updated the script to handle all of these cases.

The script should now detect the correct device IDs for the SSD and HDD if found. It will also check whether a FileVault disk is locked or unlocked. If it is unlocked, it will proceed to delete the Core Storage volume. If the FileVault volume is locked, it will throw an error to the user via a CocoaDialog message box.

The script will also now check to ensure that the drives it formats are internal, physical, non-removable media disks. As SD cards can often present as internal, physical disks, this could be a complication. Luckily they also show up as removable, so by checking this we can avoid formatting any SD cards that may be in the machine.

As I also do a lot of testing with VMware Fusion, I have a check in the script to ensure that VMware disks are formatted correctly as well. This is because VMware Fusion disks show up as external, physical disks rather than internal, physical disks.


In my environment I use DeployStudio and Imagr to deploy our “image” to client devices.

Recently I came across an issue with some iMacs that have a Fusion drive.

When using DeployStudio, I was targeting the restore task at “First drive available”.

Screen Shot 2016-06-22 at 10.56.39 AM

This had always worked very well for me in the past, however I noticed that a few of the latest iMacs had failed to complete the image build (via Munki) due to a full HD.

When I checked their computer record in Munki Report it was pretty clear what had happened.

storage

storage2

For some reason, the fusion drive has ‘un-fused’ and DeployStudio has installed our base OS image onto the SSD.

It turns out this is a bit of a known issue with DeployStudio; there are quite a few posts on the DeployStudio forums about it.

There are a few solutions out there, like having a separate workflow for Fusion Drive Macs that runs DeployStudio’s Fusion Drive task first, so you can then just target the volume name of your new Fusion Drive in the restore task:

Screen Shot 2016-06-22 at 11.13.10 AM

But I really like to have One Workflow To Rule Them All! So I didn’t like that solution; also, users don’t know whether their Mac has a Fusion Drive or not, so there is confusion there.

Instead I came up with a script that will check whether a machine has a Fusion Drive, or at least the components for one, i.e. an SSD and a HDD.

The script will then create a new Fusion Drive, deleting any existing Fusion Drive first, and create a new volume on it called Macintosh HD.

The script can also tell if the machine does not have a Fusion Drive; in that case it will simply locate the internal HDD or SSD, format it and create a volume called Macintosh HD.

So now I simply run this script as the first item in the workflow and ensure that my restore task targets my new volume called Macintosh HD, whether it be on a Fusion Drive LVG or a regular JHFS+ partition.

The contents of the script are as below:
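
The full script isn't embedded here, but a simplified sketch of the approach looks like this. It assumes the Internal, RemovableMedia and SolidState boolean keys in diskutil's plist output (check these on your OS version), and it leaves out the FileVault and VMware handling described above:

#!/bin/bash
plist_key() {
    # read a key from "diskutil info -plist" via JSON, as in the 10.12 fix above
    diskutil info -plist "$1" | plutil -convert json -o - - | \
        python -c "import sys, json; print json.load(sys.stdin).get(\"$2\")"
}

SSD=""; HDD=""
for DISK in $(diskutil list | awk '/^\/dev\/disk/ {print $1}'); do
    [ "$(plist_key "$DISK" Internal)" = "True" ] || continue          # internal disks only
    [ "$(plist_key "$DISK" RemovableMedia)" = "False" ] || continue   # skip SD cards etc.
    if [ "$(plist_key "$DISK" SolidState)" = "True" ]; then
        SSD="$DISK"
    else
        HDD="$DISK"
    fi
done

# tear down any existing Core Storage volume group (old Fusion Drive or unlocked FileVault)
LVG=$(diskutil cs list | awk '/Logical Volume Group/ {print $5; exit}')
[ -n "$LVG" ] && diskutil cs delete "$LVG"

if [ -n "$SSD" ] && [ -n "$HDD" ]; then
    # both Fusion Drive components present: fuse them and create the volume
    diskutil cs create "Macintosh HD" "$SSD" "$HDD"
    LVG=$(diskutil cs list | awk '/Logical Volume Group/ {print $5; exit}')
    diskutil cs createVolume "$LVG" jhfs+ "Macintosh HD" 100%
else
    # single internal disk: just erase it as JHFS+
    diskutil eraseDisk jhfs+ "Macintosh HD" "${SSD:-$HDD}"
fi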



When Apple Caching Server just can’t even right now

Update: This was logged as a bug to Apple and has been resolved in iOS 10 and macOS 10.12

See http://www.openradar.me/radar?id=4958891762778112 for details

Background

Apple Caching Server is pretty cool and it really makes a lot of sense in a large environment.

However, large environments often have a rather complex network topology which makes configuration and troubleshooting a little more difficult.

I just happen to work in a very large environment with a complex network topology.

We have many public WAN IPs which our client devices and Apple caching servers use to get out to the internet – via authenticated proxies no less.

Apple has some pretty good, although a bit ambiguous in parts, documentation on configuring Apple Caching for complex networks here: http://help.apple.com/serverapp/mac/5.1/#/apd6015d9573

Essentially we have a network that looks a little bit like this:

complex


Apple Caching Server supports this network topology; however, we need to provide our client devices access to a DNS TXT service record in their default search domain so the client device will know all of our WAN IP ranges.

So how does this caching server thing work on the client anyway?

There is a small binary/framework on the client device that does a ‘discovery’ of Apple caching servers approximately every hour – or if it has not yet populated a special file on disk, it will run immediately when requested by a content downloading service such as the App Store.

This special binary does this discovery by grabbing some info about the client device, such as the LAN IP address and subnet range; then it looks up our special DNS TXT record (_aaplcache._tcp) and sends all of this data to the Apple locator service at: lcdn-locator.apple.com
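
You can check what a client will see with a simple lookup against its default search domain, for example (example.com is a placeholder for your own search domain):

dig +short -t TXT _aaplcache._tcp.example.com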

Apple then matches the WAN IP ranges and the LAN IP ranges provided and sends back a config that the special process writes out to disk. This config file contains the URL of a caching server that it should use (if one has been registered).

This special file on disk is called diskCache.plist. If the tool has been able to successfully locate a caching server, you should see in this file a line like this:

"localAddressAndPort" => "10.10.10.10:49313"

Where 10.10.10.10:49313 is the IP address and port of the caching server the client should use.

Now, this diskCache.plist file lives in a folder called com.apple.AssetCacheLocatorService inside /var/folders. The exact location is stored in the DARWIN_USER_CACHE_DIR variable, which can be revealed by running:

getconf DARWIN_USER_CACHE_DIR

Which should output a directory path like this:

/var/folders/yd/y87k7kk14494j_9c0y814r8c0000gp/C/

Then you can just use plutil -p to read the diskCache.plist

sudo plutil -p /var/folders/yd/y87k7kk14494j_9c0y814r8c0000gp/C/com.apple.AssetCacheLocatorService/diskCache.plist

And it should give you some output like this

*Thanks to n8felton for the info about /var/folders!

Now all of this is fine and no problem, it all works as expected.


Except when it doesn’t.

At some sites, we were seeing a failure of client devices to pull their content from their caching server. The client device would simply pull its content over the WAN.

After a lot of trial and error and wire-sharking (is that a thing?) we found the problem.

As I mentioned earlier, we were seeing _some_ client devices unable to pull their content from the caching server. After investigation on the client, we found that they were not populating their diskCache.plist with the information we need from the Apple locator service.

How come?

Well, in our environment we utilise an RODC (read-only domain controller) at each site. This RODC also operates as a DNS server, and it is the primary DNS server provided to clients via DHCP.

We have a few “issues” with our RODCs from time to time, and quite often we just shut them down and let the clients talk to our main DCs and DNS servers over the WAN. However, when we shut down the RODCs we don’t remove them from the DHCP server’s DNS options. So clients still receive a DHCP packet with the now-powered-off RODC as their primary DNS server, along with a secondary and a third DNS server that they can use.

As expected, the clients seem quite happy with this: they are able to perform DNS lookups and browse the internet as expected, even though their primary DNS server is non-responsive.

BUT it seems that the special little caching service discovery tool on the client devices does not fail over and use the secondary (or third) DNS server. It seems that this tool only does the DNS lookup for our TXT record against the primary DNS server.

So because this DNS TXT record lookup fails, the caching service discovery tool doesn’t get a list of WAN IP address ranges to send to the Apple locator URL and thus never gets a response back about which caching server it should use!

The fix.

Once we manually remove the non-responsive primary DNS server from the DHCP options, so the client device only gets our two functional DNS servers as primary and secondary, the caching service discovery tool is able to look up our DNS TXT record, receives the correct caching server URL from the Apple locator service, and everything is right in the world again!



Enable Fast User Switching without the Menu Bar Item

In my environment we have the “require password after sleep or screen saver begins” option enabled.

Screen Shot 2016-03-07 at 4.11.00 PM

This prevents anyone from walking up to a machine that may be asleep or in screen saver mode, using that machine, and having access to the previous user’s data.

This is all fine; however, there are often times when a user has forgotten to log out and has left the machine in sleep or screen saver mode and gone home. A typical example of this is in computer lab environments.

When the machine is woken up, the only option is to enter the currently logged-in user’s password to unlock the machine, or hit the cancel button, which puts the machine back to sleep or back into the screen saver. If a user or administrator wanted to shut down, restart or log in to this machine as another user, this would not be possible.

For example, here we have a screenshot of a machine that has “require password after sleep or screen saver” set. When the machine is woken up, the user is presented with only the option to cancel or enter that user’s password. There is no option to enter an admin password to override, and no option to shut down or restart. A hard shutdown is required if the user cannot enter their password.

Screen Shot 2016-05-10 at 12.24.24 PM


Enter Fast User Switching.

Fast User Switching has been around for a long time and is very useful. It allows you to switch the currently logged-in user without having to log out. You can also shut down or restart the machine even when another user is logged in. Great for lab machines where a user has forgotten to log off.

For example, here is a picture of the same machine as above, this time with Fast User Switching enabled. As you can see we now have the option to switch user.

Screen Shot 2016-05-10 at 12.38.47 PM

Enabling Fast User Switching is pretty easy: you simply click the checkbox in System Preferences.

Screen Shot 2016-03-07 at 4.15.52 PM

The downside of this is that by default it adds the Fast User Switching menu item to the menu bar for all users. This might not be desirable in your environment; it certainly isn’t in mine.

fast_user_switching_2x

So I needed a way to programmatically enable Fast User Switching and also disable the Fast User Switching menu item.

Configuration Profiles

Configuration Profiles to the rescue!

The preference domain that controls Fast User Switching is .GlobalPreferences. We can easily manage this by setting the MultipleSessionEnabled key to TRUE in /Library/Preferences/.GlobalPreferences.

This can be achieved with a configuration profile like this:
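
The profile itself isn't reproduced here, but as a sketch, one quick way to generate an equivalent profile is to write the key to a plist named for the .GlobalPreferences domain and feed it to mcxToProfile (the identifier is arbitrary; check the mcxToProfile README for current options):

defaults write /tmp/.GlobalPreferences MultipleSessionEnabled -bool TRUE
./mcxToProfile.py --plist /tmp/.GlobalPreferences.plist --identifier FastUserSwitching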

Now we just need to remove the menu item that pops up in every users menu bar.

This can be controlled by using a configuration profile that manages the com.apple.mcxMenuExtras preference domain and sets the User.menu key to FALSE. User.menu is the name of the Fast User Switching menu item (found in /System/Library/CoreServices/Menu Extras).

Below is a configuration profile that ensures this menu item is not visible.
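
Again, the profile isn't reproduced here; as a sketch, the same mcxToProfile approach works for this domain too (the identifier is arbitrary):

defaults write /tmp/com.apple.mcxMenuExtras "User.menu" -bool FALSE
./mcxToProfile.py --plist /tmp/com.apple.mcxMenuExtras.plist --identifier DisableFUSMenuItem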

By installing both of these configuration profiles on our machines, I was able to enable FUS, but make sure that the menu item was not visible to our users. Win Win!


Managing Google Chrome on Mac OS X

I had a request to add the Google Chrome web browser to our builds. This brought about a little challenge in that Google Chrome does not fully utilise MCX/config profiles to control all of its settings, so it’s not quite as easy to manage as Safari.

With Firefox, we use the CCK to generate autoconfig files. We then have AutoPKG automatically download the latest ESR and add the CCK autoconfig files to the app bundle before wrapping it up in a nice installer package that is then imported directly into Munki which makes my life very easy. Hat tip to Greg Neagle for his AutoPKG recipes.

I was hoping to find something to make my life easier with Google Chrome but alas my Google-Fu failed me.

Here is what I have come up with that gets the job done for my environment.

So the first thing was to work out what we actually wanted to manage or setup for the user.

Items to manage

  • Disable Google Auto Updates
  • Set default home page
  • Set the home page to open on launch, but not on creation of new tabs or pages
  • Disable default browser check
  • Disable first run welcome screen
  • Skip importing history/bookmarks etc.
  • Disable saving of passwords

Config Profiles

So it turns out that just one item can be managed via a config profile: disabling Google auto updates. This is done by simply setting the checkInterval to 0.

This then causes the Google Keystone auto update mechanism to never check for updates.

To create a profile for this, I first created the plist with the setting I wanted, using the following command:

defaults write com.google.Keystone.Agent checkInterval 0

Then I used mcxToProfile to generate a config profile from this plist. I won’t go into the details of how to create a profile from a plist with mcxToProfile because Tim has already written good documentation on his site.

Check it out at https://github.com/timsutton/mcxToProfile

Chrome Master Preferences

To manage pretty much everything else we will have to create some text files.

Google uses a file called “Google Chrome Master Preferences”. This file can contain some basic preference settings that will be applied. It should be stored at /Library/Google/Google Chrome Master Preferences.

Below is the content of my Master Preferences file; it’s just plain JSON.
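
The original file isn't reproduced here, but a minimal sketch of that kind of content, written to the expected path, looks like this (the home page URL is a placeholder and the keys follow Chromium's documented master_preferences format rather than being an exact copy of my file):

sudo mkdir -p /Library/Google
sudo tee "/Library/Google/Google Chrome Master Preferences" > /dev/null <<'EOF'
{
  "homepage": "http://intranet.example.com",
  "homepage_is_newtabpage": false,
  "browser": {
    "show_home_button": true,
    "check_default_browser": false
  },
  "distribution": {
    "skip_first_run_ui": true,
    "show_welcome_page": false,
    "import_bookmarks": false,
    "import_history": false,
    "import_search_engine": false,
    "make_chrome_default": false
  },
  "first_run_tabs": ["http://intranet.example.com"]
}
EOF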

Application Support

Chrome also requires some files to be placed in ~/Library/Application Support/Google/Chrome

These files set the rest of our preferences and also prevent the welcome/first-run screen from being shown at first launch.

So first create a file called Preferences. This is in the same JSON format and looks similar to the Google Chrome Master Preferences file; however, some of the settings in this file cannot be made in the Master Preferences file for some reason.

My file looks like this:
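
Again, a minimal illustrative sketch rather than my exact file (the home page is a placeholder; the startup and password-manager keys shown are common Chrome preference names and may need checking against your Chrome version):

mkdir -p ~/Library/Application\ Support/Google/Chrome
tee ~/Library/Application\ Support/Google/Chrome/Preferences > /dev/null <<'EOF'
{
  "homepage": "http://intranet.example.com",
  "session": {
    "restore_on_startup": 4,
    "startup_urls": ["http://intranet.example.com"]
  },
  "credentials_enable_service": false,
  "profile": {
    "password_manager_enabled": false
  }
}
EOF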

Now create a folder called Default inside ~/Library/Application Support/Google/Chrome and place the Preferences file inside this Default folder.

That will set up the default preferences.

First Run

Now to disable the first run / welcome screen, we have to create an empty file called First Run inside the ~/Library/Application Support/Google/Chrome folder. This can easily be achieved by simply using the touch command, i.e.:

touch ~/Library/Application\ Support/Google/Chrome/"First Run"

Putting it all together

So now we have all the pieces we need, how do we deploy it to client machines?

Package it all up and craft some pre/post flight scripts.

Creating the package

First create a package that deploys our Google Chrome Master Preferences file into /Library/Google

We also need to store the other files that need to go into the user’s home folder. What I like to do is store those items in the Scripts folder in /Library; then I can copy them from there with a script later.

I like using Whitebox Packages to create my pkg’s

This is what my package looks like:

Screen Shot 2016-01-07 at 1.31.06 PM

Now we get to the scripting part.

Pre-install script

First we will start with a pre-install script that removes any pre-existing content, so that if we need to update these preferences later we can be sure our package removes the old items before installing the new ones.
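
The script isn't embedded here, but a minimal sketch of that idea (the staging folder name under /Library/Scripts is hypothetical):

#!/bin/bash
# preinstall: clear out any previously deployed Chrome preference files
rm -f "/Library/Google/Google Chrome Master Preferences"
rm -rf "/Library/Scripts/chrome_files"
exit 0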

Post-install script

Once our package has placed our Google preference files onto the machine, our post-install script runs. It installs these files into the System user template, and also goes through any existing home folders on the machine and adds the files to those home directories.

This is basically what Casper users refer to as FUT (Fill User Template) and FEU (Fill Existing Users (Folders))
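
Again, a sketch rather than the original script (it reuses the hypothetical /Library/Scripts/chrome_files staging folder from the preinstall sketch above):

#!/bin/bash
# postinstall: copy the staged Chrome files into the user template (FUT)
# and into every existing home folder (FEU)
STAGED="/Library/Scripts/chrome_files"                  # contains our Preferences file
TEMPLATE="/System/Library/User Template/English.lproj"

install_for() {
    local CHROME_DIR="$1/Library/Application Support/Google/Chrome"
    mkdir -p "$CHROME_DIR/Default"
    cp "$STAGED/Preferences" "$CHROME_DIR/Default/Preferences"
    touch "$CHROME_DIR/First Run"
}

install_for "$TEMPLATE"                                 # new accounts

for USER_HOME in /Users/*; do                           # existing accounts
    USERNAME=$(basename "$USER_HOME")
    [ "$USERNAME" = "Shared" ] && continue
    install_for "$USER_HOME"
    chown -R "$USERNAME" "$USER_HOME/Library/Application Support/Google"
done

exit 0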

Add the two scripts as preinstall and postinstall scripts to the package and build it.

Screen Shot 2016-01-07 at 1.49.08 PM

Deploying it

Now we have an installer package and a config profile.

I import both of these items into Munki and set them as an update_for for Google Chrome, which itself gets imported automatically by AutoPKG. Now when Munki installs Google Chrome, it also installs the config profile and our preferences package, and the user gets a nice experience with no nagging screens.


Fun with Microsoft Office 2016

Some background:

In my organisation I deploy software with Munki. Previously, with Office 2011, it was pretty easy to deploy and get a fully up-to-date install of Office 2011:

  • Install the full volume licence installer from the VLSC site, which was version 14.3.0 and around 1 GB
  • Let Munki apply the latest combo updater it has (14.5.8), around 120 MB

And we’re done. Pretty easy and painless, and about 1.1 GB of data to send to the client.

However, in Office 2016, Microsoft has sandboxed their applications, which is the right thing to do™. What this means, though, is that any shared content the apps use, such as frameworks, fonts and proofing tools, all needs to be contained within each application bundle. Previously, in Office 2011, the apps could get this content from a shared location such as /Library/Application Support.

This means that our Office 2016 installer package is only about 1.3 GB; the installer then just copies the same files into each app bundle at install time via a postinstall script. That results in each app being rather large, as you can see here.

Screen Shot 2015-11-17 at 4.44.27 pm


It also means that the Office 2016 updates that Microsoft offers for each app are huge, approx 800 MB per app.

So now if we applied our same methodology of deploying Office 2011 to Office 2016 we would end up with something like this:

  • Install the full Office 2016 VL installer (1.3 GB)
  • Let Munki apply each app update, ~800 MB times 5 apps (Word, Excel, PowerPoint, OneNote, Outlook)

That means we are pushing about 5 GB to the client just to install Office. That's insane.

Solution?

Well, we only need the full VL installer for its special licensing package, which generates the volume licence plist (/Library/Preferences/com.microsoft.office.licensingV2.plist).

Microsoft offers the latest version of the suite in a package that contains all the apps and is SKU-less, meaning no licensing (it can be licensed as O365 or VL).

So we could just download the latest suite package, which is about 1.3 GB, install that on our client machines, install the special licensing package on top to license the suite, and we’re done. We would only need to push about 1.3 GB to the client to have a fully up-to-date Office 2016 installation. That's much more manageable for remote sites with slow links, or even labs with lots of machines.

Any new updates to the apps would simply mean downloading the full suite package again, approx 1.3 GB, and pushing that to clients. That is still about 10 times more than an Office 2011 combo updater, but much smaller than pushing each 2016 app update to the client.

Word is that Microsoft are working on Delta updates that will be much much smaller, but until then this might be a workable solution.

Where to get the latest full installer package?

The full suite package is available via the following FWLinks. It's available via a CDN, so choose the closest location to you for the fastest download.

Dublin:
Office 2016 for Mac suite: http://go.microsoft.com/fwlink/?LinkID=532572

Puerto Rico:
Office 2016 for Mac suite: http://go.microsoft.com/fwlink/?LinkID=525133

Singapore:
Office 2016 for Mac suite: http://go.microsoft.com/fwlink/?LinkID=532577

Wait what? How do I get that special licensing package though?

If you open the VL installer package that you got from the Microsoft VLSC site with something like Pacifist, you will see the package we are talking about

Screen Shot 2015-11-17 at 4.23.23 pm

This package “Office15_all_volume_licensing.pkg” is the one we are after.

To extract just this package we need to expand the Microsoft_Office_2016_Volume_Installer.pkg and then flatten the Office15_all_volume_licensing.pkg.

So bust open the terminal and use these commands:

Unpack the installer package into a new directory on our desktop. Note: this new directory should not already exist; the pkgutil command will create it.

pkgutil --expand /Volumes/Office\ 2016\ VL/Microsoft_Office_2016_Volume_Installer.pkg ~/Desktop/Office_2016_Unpacked

Now let's flatten the licensing package and save it on our desktop:

pkgutil --flatten ~/Desktop/Office_2016_Unpacked/Office15_all_volume_licensing.pkg ~/Desktop/Office_2016_VL_serializer.pkg

So now we have a standalone serializer package that can be deployed to any machine, and it will generate the volume licence plist that Office 2016 looks for.
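
Deploying it is then just a normal package install, for example:

sudo installer -pkg ~/Desktop/Office_2016_VL_serializer.pkg -target /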

Why not just package up and deploy the com.microsoft.office.licensingV2 plist?

In the past, a lot of people, myself included, would often just create a package of a pre-created licence plist from one machine and then deploy that package to multiple machines. That was enough for Office to pick up the correct licensing and everything seemed good in the world.

However the official word from Microsoft is that this is bad ju-ju and we should stop doing this as it is unsupported and may break your Office install. Instead run the serializer package on every machine that requires it.

Microsoft plans to make the serialiser package available as a standalone package on the VLSC ISO, so this extracting and flattening process will soon be redundant. Until then this might help you out.