Enable Fast User Switching without the Menu Bar Item

In my environment we have the “require password after sleep or screen saver begins” option enabled.

Screen Shot 2016-03-07 at 4.11.00 PM

This prevents anyone from walking up to a machine that is asleep or in screen saver mode, using it, and gaining access to the previous user's data.

This is all fine; however, there are often times when a user has forgotten to log out and has left the machine asleep or in screen saver mode and gone home. A typical example of this is in computer lab environments.

When the machine is woken up, the only options are to enter the currently logged-in user's password to unlock the machine, or to hit the cancel button, which puts the machine back to sleep or into the screen saver. If a user or administrator wanted to shut down, restart, or log in to this machine as another user, this would not be possible.

For example, here we have a screenshot of a machine that has “require password after sleep or screen saver” set. When the machine is woken up, the user is presented with only the option to cancel or enter the user's password. There is no option to enter an admin password to override, and no option to shut down or restart. A hard shutdown is required if the user cannot enter their password.

Screen Shot 2016-05-10 at 12.24.24 PM

 

Enter Fast User Switching.

Fast user switching has been around for a long time and is very useful. It allows you to switch the currently logged-in user without having to log out. You can also shut down or restart the machine even when another user is logged in. Great for lab machines where a user has forgotten to log off.

For example, here is a picture of the same machine as above, this time with Fast User Switching enabled. As you can see we now have the option to switch user.

Screen Shot 2016-05-10 at 12.38.47 PM

Enabling Fast User Switching is pretty easy: you simply tick the checkbox in System Preferences.

Screen Shot 2016-03-07 at 4.15.52 PM

The downside is that, by default, this adds the Fast User Switching menu item to the menu bar for all users. This might not be desirable in your environment; it certainly isn't in mine.

fast_user_switching_2x

So I needed a way to programmatically enable Fast User Switching and also disable the Fast User Switching menu item.

Configuration Profiles

Configuration Profiles to the rescue!

The preference domain that controls Fast User Switching is .GlobalPreferences. We can easily manage this by setting the MultipleSessionEnabled key to true in /Library/Preferences/.GlobalPreferences.

This can be achieved with a configuration profile like this
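A sketch of such a profile, in the style mcxToProfile generates (the identifiers and UUIDs here are placeholders; generate real UUIDs with uuidgen):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PayloadContent</key>
    <array>
        <dict>
            <key>PayloadType</key>
            <string>com.apple.ManagedClient.preferences</string>
            <key>PayloadIdentifier</key>
            <string>com.example.fus.payload</string>
            <key>PayloadUUID</key>
            <string>REPLACE-WITH-UUID-1</string>
            <key>PayloadVersion</key>
            <integer>1</integer>
            <key>PayloadContent</key>
            <dict>
                <key>.GlobalPreferences</key>
                <dict>
                    <key>Forced</key>
                    <array>
                        <dict>
                            <key>mcx_preference_settings</key>
                            <dict>
                                <key>MultipleSessionEnabled</key>
                                <true/>
                            </dict>
                        </dict>
                    </array>
                </dict>
            </dict>
        </dict>
    </array>
    <key>PayloadDisplayName</key>
    <string>Enable Fast User Switching</string>
    <key>PayloadIdentifier</key>
    <string>com.example.fus</string>
    <key>PayloadScope</key>
    <string>System</string>
    <key>PayloadType</key>
    <string>Configuration</string>
    <key>PayloadUUID</key>
    <string>REPLACE-WITH-UUID-2</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
</dict>
</plist>
```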

Now we just need to remove the menu item that pops up in every users menu bar.

This can be controlled using a configuration profile that manages the com.apple.mcxMenuExtras preference domain and sets the User.menu key to false. User.menu is the name of the Fast User Switching menu extra (found in /System/Library/CoreServices/Menu Extras).

Below is a configuration profile that ensures this menu item is not visible.
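A sketch of that profile in the same mcxToProfile style (identifiers and UUIDs are placeholders to replace with your own):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PayloadContent</key>
    <array>
        <dict>
            <key>PayloadType</key>
            <string>com.apple.ManagedClient.preferences</string>
            <key>PayloadIdentifier</key>
            <string>com.example.menuextras.payload</string>
            <key>PayloadUUID</key>
            <string>REPLACE-WITH-UUID-1</string>
            <key>PayloadVersion</key>
            <integer>1</integer>
            <key>PayloadContent</key>
            <dict>
                <key>com.apple.mcxMenuExtras</key>
                <dict>
                    <key>Forced</key>
                    <array>
                        <dict>
                            <key>mcx_preference_settings</key>
                            <dict>
                                <key>User.menu</key>
                                <false/>
                            </dict>
                        </dict>
                    </array>
                </dict>
            </dict>
        </dict>
    </array>
    <key>PayloadDisplayName</key>
    <string>Hide Fast User Switching Menu Item</string>
    <key>PayloadIdentifier</key>
    <string>com.example.menuextras</string>
    <key>PayloadScope</key>
    <string>System</string>
    <key>PayloadType</key>
    <string>Configuration</string>
    <key>PayloadUUID</key>
    <string>REPLACE-WITH-UUID-2</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
</dict>
</plist>
```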

By installing both of these configuration profiles on our machines, I was able to enable Fast User Switching while making sure that the menu item was not visible to our users. Win-win!


Managing Google Chrome on Mac OS X

I had a request to add the Google Chrome web browser to our builds. This brought a little challenge in that Google Chrome does not fully utilise MCX / config profiles to control all of its settings, so it's not quite as easy to manage as Safari.

With Firefox, we use the CCK to generate autoconfig files. We then have AutoPKG automatically download the latest ESR and add the CCK autoconfig files to the app bundle before wrapping it up in a nice installer package that is then imported directly into Munki which makes my life very easy. Hat tip to Greg Neagle for his AutoPKG recipes.

I was hoping to find something to make my life easier with Google Chrome but alas my Google-Fu failed me.

Here is what I have come up with that gets the job done for my environment.

So the first thing was to work out what we actually wanted to manage or setup for the user.

Items to manage

  • Disable Google Auto Updates
  • Set default home page
  • Set the home page to open on launch, but not on creation of new tabs or windows
  • Disable default browser check
  • Disable first run welcome screen
  • Skip importing history, bookmarks, etc.
  • Disable saving of passwords

Config Profiles

So it turns out that one item can be managed via a config profile: disabling Google auto-updates. This is done simply by setting the checkInterval key to 0.

This then causes the Google Keystone auto update mechanism to never check for updates.

To create a profile for this, I first created the plist with the setting I wanted, using the following command:

defaults write com.google.Keystone.Agent checkInterval 0

Then I used mcxToProfile to generate a config profile from this plist. I won't go into the details of how to create a profile from a plist with mcxToProfile because Tim has already written good documentation on his site.

Check it out at https://github.com/timsutton/mcxToProfile

Chrome Master Preferences

To manage pretty much everything else we will have to create some text files.

Google uses a file called “Google Chrome Master Preferences”. This file can contain some basic preference settings that will be applied. It should be stored at /Library/Google/Google Chrome Master Preferences.

Below is the content of my Master Preferences file; it's just plain JSON.
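A minimal sketch of such a file, covering the goals listed above. The keys come from Chromium's master_preferences format; the URL is a placeholder, and the exact key set you need may differ:

```json
{
  "homepage": "http://intranet.example.com",
  "homepage_is_newtabpage": false,
  "browser": {
    "show_home_button": true
  },
  "distribution": {
    "skip_first_run_ui": true,
    "import_bookmarks": false,
    "import_history": false,
    "import_search_engine": false,
    "make_chrome_default": false,
    "suppress_first_run_default_browser_prompt": true
  }
}
```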

Application Support

Chrome also requires some files to be placed in ~/Library/Application Support/Google/Chrome

These files set the rest of our preferences and also prevent the welcome/first-run screen from being shown at first launch.

So first create a file called Preferences. This is in the same JSON format and looks similar to the Google Chrome Master Preferences file; however, some of the settings in this file cannot be made in the Master Preferences file for some reason.

My file looks like this:
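A sketch of a Preferences file in that format. The URLs are placeholders and the key set is illustrative; restore_on_startup set to 4 tells Chrome to open the startup_urls list on launch:

```json
{
  "homepage": "http://intranet.example.com",
  "homepage_is_newtabpage": false,
  "browser": {
    "check_default_browser": false
  },
  "session": {
    "restore_on_startup": 4,
    "startup_urls": ["http://intranet.example.com"]
  },
  "profile": {
    "password_manager_enabled": false
  }
}
```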

Now create a folder called Default inside ~/Library/Application Support/Google/Chrome and place the Preferences file inside this Default folder.

That will set up the default preferences.

First Run

Now, to disable the first-run/welcome screen, we have to create an empty file called First Run inside the ~/Library/Application Support/Google/Chrome folder. This can easily be achieved using the touch command, e.g.:

touch ~/Library/Application\ Support/Google/Chrome/"First Run"

Putting it all together

So now we have all the pieces we need, how do we deploy it to client machines?

Package it all up and craft some pre/post flight scripts.

Creating the package

First create a package that deploys our Google Chrome Master Preferences file into /Library/Google

We also need to store the other files that need to go into the users home folder. What I like to do is to store those items in the Scripts folder in /Library. Then I can copy them from there with a script later.

I like using WhiteBox's Packages to create my pkgs.

This is what my package looks like:

Screen Shot 2016-01-07 at 1.31.06 PM

Now we get to the scripting part.

Pre-install script

First we will start with a pre-install script that will remove any pre-existing content, so that if we need to update these preferences later we can be sure that our package will remove any items before installing the new items.
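My original script isn't reproduced here, but a sketch of the idea looks like this. The staged folder name chrome_prefs is an assumption; adjust it to wherever your package stages the user-level files. The prefix argument exists only so the logic can be exercised against a temporary directory (it is empty, i.e. the real filesystem root, during an install):

```shell
#!/bin/sh
# Hypothetical pre-install sketch: remove previously deployed Chrome
# preference files so an updated package starts clean.

clean_chrome_prefs() {
    # $1 = optional path prefix (empty for a real install)
    prefix="${1:-}"
    # The system-wide master preferences file
    rm -f "${prefix}/Library/Google/Google Chrome Master Preferences"
    # The staged user-level files kept in /Library/Scripts (assumed name)
    rm -rf "${prefix}/Library/Scripts/chrome_prefs"
}

clean_chrome_prefs ""
```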

Post-install script

Once our package has placed our Google preference files onto the machine, our post-install script runs, installing these files into the system user template as well as going through any existing home folders on the machine and adding the files to them.

This is basically what Casper users refer to as FUT (Fill User Template) and FEU (Fill Existing Users (Folders))
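Mine isn't shown here, but a sketch of the FUT/FEU idea follows. The paths (the chrome_prefs staging folder, the English.lproj template) are assumptions based on the layout described above, and the prefix argument exists only so the logic can be tested against a temp directory:

```shell
#!/bin/sh
# Hypothetical post-install sketch: copy the staged Chrome files into the
# user template (FUT) and every existing home folder (FEU).

install_chrome_prefs() {
    # $1 = optional path prefix (empty for a real install)
    prefix="${1:-}"
    src="${prefix}/Library/Scripts/chrome_prefs"
    [ -d "$src" ] || return 0

    for home in "${prefix}/System/Library/User Template/English.lproj" \
                "${prefix}"/Users/*; do
        [ -d "$home" ] || continue
        case "$home" in */Shared) continue ;; esac
        dest="$home/Library/Application Support/Google/Chrome"
        mkdir -p "$dest/Default"
        # The Default/Preferences file plus the empty First Run marker
        cp "$src/Preferences" "$dest/Default/Preferences"
        touch "$dest/First Run"
        # Hand ownership back to the home folder's owner
        owner="$(stat -f %Su "$home" 2>/dev/null || stat -c %U "$home")"
        chown -R "$owner" "$home/Library/Application Support/Google" 2>/dev/null || true
    done
}

install_chrome_prefs ""
```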

Add the two scripts as preinstall and postinstall scripts to the package and build it.

Screen Shot 2016-01-07 at 1.49.08 PM

Deploying it

Now we have an installer package and a config profile.

I import both of these items into Munki and make them an update_for the Google Chrome item, which is imported automatically by AutoPKG. Now when Munki installs Google Chrome, it also installs the config profile and our preferences package, and the user gets a nice experience with no nagging screens.
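In each item's pkginfo, that relationship is expressed with the update_for key. A minimal fragment might look like this (the name GoogleChrome is an assumption; it must match the name of your AutoPKG-imported Chrome item):

```xml
<key>update_for</key>
<array>
    <string>GoogleChrome</string>
</array>
```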

 


 

Fun with Microsoft Office 2016

Some background:

In my organisation I deploy software with Munki. Previously, with Office 2011, it was pretty easy to deploy and get a fully up-to-date install of Office 2011:

  • Install the full volume licence installer from the VLSC site, which was version 14.3.0 and around 1 GB
  • Let Munki apply the latest combo updater it has (14.5.8), around 120 MB

And we're done. Pretty easy and painless, and about 1.1 GB of data to send to the client.

However, in Office 2016, Microsoft has sandboxed their applications, which is the right thing to do™. What this means, though, is that any shared content the apps use, such as frameworks, fonts, and proofing tools, all needs to be contained within each application bundle. Previously, in Office 2011, the apps could get this content from a shared location such as /Library/Application Support.

This means that while our Office 2016 installer package is only about 1.3 GB, the installer copies the same files into each app bundle at install time via a postinstall script. That results in each app being rather large, as you can see here.

Screen Shot 2015-11-17 at 4.44.27 pm

It also means that the Office 2016 updates Microsoft offers for each app are huge: approximately 800 MB per app.

So now if we applied our same methodology of deploying Office 2011 to Office 2016 we would end up with something like this:

  • Install the full Office 2016 VL installer (1.3 GB)
  • Let Munki apply each app update, ~800 MB times 5 apps (Word, Excel, PowerPoint, OneNote, Outlook)

That means we are pushing about 5 GB to the client just to install Office. That's insane.

Solution?

Well, we only need the full VL installer for its special licensing package, which generates the volume licence plist (/Library/Preferences/com.microsoft.office.licensingV2.plist).

Microsoft offers the latest version of the suite in a package that contains all the apps and is SKU-less, meaning no licensing (it can be licensed as O365 or VL).

So we could just download the latest suite package, which is about 1.3 GB, install that on our client machines, then install the special licensing package on top to licence the suite, and we're done. We would only need to push about 1.3 GB to the client to have a fully up-to-date Office 2016 installation. That's much more manageable for remote sites with slow links, or even labs with lots of machines.

Any new updates to the apps would simply mean downloading the full suite package again (approximately 1.3 GB) and pushing that to clients. That's still 10 times more than an Office 2011 combo updater, but much smaller than pushing each 2016 app update to the client.

Word is that Microsoft are working on Delta updates that will be much much smaller, but until then this might be a workable solution.

Where to get the latest full installer package?

The full suite package is available via the following FWLinks. It's served from a CDN, so choose the closest location to you for the fastest download.

Dublin:
Office 2016 for Mac suite: http://go.microsoft.com/fwlink/?LinkID=532572

Puerto Rico:
Office 2016 for Mac suite: http://go.microsoft.com/fwlink/?LinkID=525133

Singapore:
Office 2016 for Mac suite: http://go.microsoft.com/fwlink/?LinkID=532577

Wait what? How do I get that special licensing package though?

If you open the VL installer package that you got from the Microsoft VLSC site with something like Pacifist, you will see the package we are talking about:

Screen Shot 2015-11-17 at 4.23.23 pm

This package “Office15_all_volume_licensing.pkg” is the one we are after.

To extract just this package, we need to expand the Microsoft_Office_2016_Volume_Installer.pkg and then flatten the Office15_all_volume_licensing.pkg.

So bust open the terminal and use these commands:

Unpack the installer package into a new directory on our desktop. Note: this new directory should not already exist; the pkgutil command will create it.

pkgutil --expand /Volumes/Office\ 2016\ VL/Microsoft_Office_2016_Volume_Installer.pkg ~/Desktop/Office_2016_Unpacked

Now let's flatten the licensing package and save it on our desktop.

pkgutil --flatten ~/Desktop/Office_2016_Unpacked/Office15_all_volume_licensing.pkg ~/Desktop/Office_2016_VL_serializer.pkg

So now we have a standalone serializer package that can be deployed to any machine, and it will generate the volume licence plist that Office 2016 looks for.

Why not just package up and deploy the com.microsoft.office.licensingV2 plist?

In the past, a lot of people, myself included, would often just create a package of a pre-created licence plist from one machine and then deploy that package to multiple machines. That was enough for Office to pick up the correct licensing, and everything seemed good in the world.

However, the official word from Microsoft is that this is bad juju and we should stop doing it, as it is unsupported and may break your Office install. Instead, run the serializer package on every machine that requires it.

Microsoft plans to make the serializer package available as a standalone package on the VLSC ISO, so this extracting and flattening process will soon be redundant. Until then, this might help you out.


Updating Docker with Puppet on CentOS 7

I have a lot of CentOS 7.1 hosts that are under config management with Puppet. These CentOS hosts run some Docker containers to provide services to Mac devices.

I’ve been on the Docker bandwagon (ship?) for a while now so when these hosts were created and put into use, the latest and greatest version of Docker was 1.5.

Recently Docker sent out an email alerting me that they will soon be changing Docker Hub so that Docker clients version 1.5 and earlier will no longer be able to push to or pull from their hub.

Dear Docker Hub user
Last spring Docker released Engine version 1.6 and Registry version 2. These introduced an updated push / pull protocol featuring faster image transfers, simplified image definition, and improved security. The Docker community has aggressively adopted them, and as a result over 99% of Docker Hub usage is based on these newer version. As a result we are deprecating support on Docker Hub for clients version 1.5 and earlier.

* On November 19, 2015 Docker clients version 1.5 and earlier will not be able to push images to Docker Hub. They will still be able to pull images. And of course the repositories are fully accessible via newer versions of the Docker client.

* On December 7, 2015, pulls via clients 1.5 and earlier will be disabled. Only version 1.6 or later will be supported.

Handling this migration is simple; all that you need to do is upgrade your Docker client to version 1.6 or later. Please be sure to upgrade any clients that are pushing or pulling from your repository, including those on development machines, product servers, or that are part of CI and CD workflows.

If you have any questions, please do not hesitate to contact us.

Best regards,

The Docker Hub Team

So I thought I better upgrade the Docker binary on my hosts.

I manage Docker with a pretty basic puppet manifest that looks a bit like this
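A minimal sketch of a manifest along those lines (the class name is an assumption):

```puppet
# Minimal sketch: keep the docker package installed and the daemon running.
class profiles::docker {
  package { 'docker':
    ensure => installed,
  }

  service { 'docker':
    ensure  => running,
    enable  => true,
    require => Package['docker'],
  }
}
```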

So I updated the package ensure section of the manifest to include the version that I want my hosts to use; in this case it's 1.7.1-115.el7.

However, when I ran this on a host, I found that after the Docker binary had been updated, my containers, which were started with the --restart=always flag, did not automatically start back up.

Even trying to stop them and start them again with docker stop <container name> and docker start <container name> failed.

Further digging found error messages like this:

err="Cannot start container netboot_server: [8] System error: Unit docker-a643761a31b77c798057b9036f8bc4c3802e4831608dcb3956255729f414ece4.scope already exists." statusCode=500

So that's kind of weird.

Telling systemctl to stop that unit and then restarting Docker fixed the issue, so I knew it wasn't the end of the world.

So running the following brought my containers back online:

systemctl stop docker-a643761a31b77c798057b9036f8bc4c3802e4831608dcb3956255729f414ece4.scope

systemctl restart docker

But that's not really nice, and how do we make sure that we get the right names of the Docker scope units?

Well, it's pretty easy to get the names of the Docker units by doing something like:

systemctl list-units --type=scope | awk -F " " '/docker/ {print $1}'

In my case this gave me two containers which is what I have running on my hosts.

So now I just needed to add a pipe to head to get the first result:

systemctl list-units --type=scope | awk -F " " '/docker/ {print $1}' | head -n1

And for the second result I pipe it to tail to get the last result

systemctl list-units --type=scope | awk -F " " '/docker/ {print $1}' | tail -1

Then to put it all together as a single command to run

systemctl stop $(systemctl list-units --type=scope | awk -F " " '/docker/ {print $1}' | head -n1); systemctl stop $(systemctl list-units --type=scope | awk -F " " '/docker/ {print $1}' | tail -1); systemctl restart docker

So now we have a command we can run to stop our stuck docker containers and restart the docker daemon and bring our containers back up.
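The head/tail pipelines above work for exactly two containers. The same idea generalises to any number of stuck scope units with a loop; the awk filter below is a slightly tightened version of the one above:

```shell
# Extract docker-*.scope unit names from `systemctl list-units` output.
list_docker_scopes() {
    awk '$1 ~ /^docker-.*\.scope$/ {print $1}'
}

# Real usage (commented out so the snippet can be exercised offline):
# systemctl list-units --type=scope | list_docker_scopes | \
#     while read -r unit; do systemctl stop "$unit"; done
# systemctl restart docker
```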

To add this into our puppet manifest, I created an exec and set it to subscribe to the Package[‘docker’]

So the final manifest looks like this (note that I had to escape the quotes in the Puppet command, so I used single quotes for everything and escaped them with backslashes).
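A sketch of that final manifest (class and exec names are illustrative). The exec is refreshonly and subscribes to Package['docker'], so it only fires when the docker package actually changes:

```puppet
class profiles::docker {
  package { 'docker':
    ensure => '1.7.1-115.el7',
  }

  service { 'docker':
    ensure  => running,
    enable  => true,
    require => Package['docker'],
  }

  # Stop the stale scope units and bounce the daemon after an upgrade.
  exec { 'restart_stuck_docker_containers':
    command     => 'systemctl stop $(systemctl list-units --type=scope | awk -F \' \' \'/docker/ {print $1}\' | head -n1); systemctl stop $(systemctl list-units --type=scope | awk -F \' \' \'/docker/ {print $1}\' | tail -1); systemctl restart docker',
    path        => ['/usr/bin', '/usr/sbin', '/bin'],
    refreshonly => true,
    subscribe   => Package['docker'],
  }
}
```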

Resizing El Capitan Mac volumes under VMWare Fusion

Apple has made some changes to Disk Utility in OS X 10.11 El Capitan. One of the biggest changes is how the partition tab looks and functions.

Under Yosemite 10.10 and earlier, it was quite simple to increase the size of a virtual hard disk in VMWare Fusion and then expand the volume on the guest Mac OS VM.

Increase the size of the virtual disk in VMWare Fusion

Screen Shot 2015-10-21 at 12.53.53 pm

Open up Disk Utility in the guest VM

Screen Shot 2015-10-21 at 12.59.39 pm

Note that the VMWare Virtual Disk is showing as 107.37 GB, yet the Macintosh HD volume is showing a capacity of 33.5 GB.

To expand the Macintosh HD volume to the maximum, you just needed to click on the VMWare Virtual Disk, click the Partition tab, and drag the slider in the bottom right-hand corner all the way down; note that the size changes under Partition Information.

Screen Shot 2015-10-21 at 1.02.21 pm

Drag the slider to the bottom

Screen Shot 2015-10-21 at 1.02.29 pm

Hit apply and the volume will be expanded. Pretty easy!

Screen Shot 2015-10-21 at 1.03.45 pm

But in 10.11 we are presented with this

Screen Shot 2015-10-21 at 1.05.42 pm

Screen Shot 2015-10-21 at 1.05.48 pm

So we can see that the VMWare Virtual Disk is 94.49 GB, but our Macintosh HD volume is only 34.57 GB.

Hitting the partition button gives us this

Screen Shot 2015-10-21 at 1.13.24 pm

And Disk Utility can’t seem to see the extra space!

Grrr.

Fortunately, it's a trivial task to achieve the same goal using the command line.

So bust open Terminal.app and enter the following command

sudo diskutil resizeVolume / R

This will resize the volume at / (Macintosh HD) to the maximum size (R).

Screen Shot 2015-10-21 at 1.23.49 pm

De-Clouding Adobe Acrobat DC

Adobe has updated their Acrobat Pro app with a new version, Adobe Acrobat DC. The DC stands for Document Cloud.

Adobe has this to say when asked “What is Adobe Acrobat DC?”

Acrobat DC with Adobe Document Cloud is the complete PDF solution for working anywhere with your most important documents. All-new Acrobat DC is totally reimagined with a stunningly simple user experience that works consistently across desktop, mobile, and the web – including touch enabled devices.

Here are just a few things you can do with Acrobat DC:

  • Work anywhere. Create, edit, comment, and sign with the new Acrobat DC mobile app. And use Mobile Link to access recent files across desktop, web, and mobile.
  • Edit anything. Instantly edit PDFs and scanned documents as naturally as any other file with revolutionary new imaging technology.
  • Replace ink signatures. Send, track, manage, and store signed documents with a built-in e-signature service.
  • Protect important documents. Prevent others from copying or editing sensitive information in PDFs.
  • Eliminate overnight envelopes. Send, track, and confirm delivery of documents electronically.

That all sounds great, but if you work in education or with lab machines, this whole cloud business is a right pain in the butt. Luckily, Adobe has a Customisation “Wizard” that will edit the Acrobat DC installer package to remove all the cloudy stuff.

Using it is not super straightforward, though, so here are the steps I used to successfully create a package we were able to deploy.

First up, I will assume that you have already created an Adobe Creative Cloud package with the Creative Cloud Packager tool from Adobe and have a package sitting in a build folder like this:

Screen Shot 2015-09-11 at 5.34.13 pm

Step 1.

Get the Customisation wizard app -> http://www.adobe.com/devnet-docs/acrobatetk/tools/Wizard/index.html

Screen Shot 2015-09-11 at 5.27.32 pm

Step 2.

Launch the Acrobat Customization Wizard DC

Screen Shot 2015-09-11 at 4.57.57 pm

Now locate the installer. This is the bit that tripped me up at first: I thought I would simply provide the package that CCP gave me. But no, you need to dig into the CCP package and find the actual Adobe Acrobat DC package.

So right-click on the CCP package, choose Show Package Contents, and locate the Adobe Acrobat DC package, like this:

Screen Shot 2015-09-11 at 4.59.10 pm

Now click on the Locate Installer button and drag the Acrobat DC Installer.pkg to the dialog box

Fill out your serial number and choose the options you wish to customise like this:

Screen Shot 2015-09-11 at 5.00.17 pm

Hit OK and you will be asked where you wish to save the output package. I chose to put it on my Desktop so I could move it around easily later.

Screen Shot 2015-09-11 at 5.08.27 pm

Hit Save and the Wizard will go ahead and fix up the installer package.

Screen Shot 2015-09-11 at 5.02.09 pm

Step 3.

Now we just need to replace the old Acrobat DC installer in the CCP package with the new one that's on the Desktop. Simply drag it into the Acrobat DC folder and choose to replace the existing item.

Screen Shot 2015-09-11 at 5.04.02 pm

 

And that's it!

 

Now when you install the Adobe CC package, Acrobat DC's cloudy goodness will be disabled, and your users who are behind authenticated proxies will rejoice in the fact that there are no nagging prompts and first-run screens.


Packaging Adobe Captivate 9 with NeoSpeech Voices and eLearning Assets

Recently I was tasked with creating an installer package for Adobe Captivate 9 that includes our VLA serial key so that it can be deployed via our management system or by simply giving a user access to the installer package so they can install themselves.

At first there was no problem: simply using the Creative Cloud Packager tool, I was able to download and create an installer package for Adobe Captivate 9 that worked for us.

Screen Shot 2015-08-27 at 11.12.34 am

However we also wanted to install a couple of additional products to go with Captivate; the NeoSpeech Voices and eLearning Assets packages.

Screen Shot 2015-08-27 at 11.11.15 am

I was hoping that I could include these as offline media in the Creative Cloud Packages, but of course Adobe being Adobe, this was not to be.

Screen Shot 2015-08-27 at 11.25.03 am

So now I needed a way to install these extra packages silently with our management system. But these being special Adobe installers and not native Apple packages, it made life hard.

I came across some sites that indicated it should be possible to run the installer from the command line with a silent flag like:

./NeoSpeech\ Voices/Install.app/Contents/MacOS/Install --mode=silent

However, this was not working for me: the output from the command would indicate that the install was successful (exit 0), yet no files were actually installed.

So after more digging I found an extra flag that might help.

--deploymentFile=

The deployment file is an XML file that is created by the GUI installer; it tells the installer application some important things like where to install the software, what language to use, and where the installer source files are. It also includes an "adobeCode".

It seems that when running the install binary with --mode=silent but without specifying a deployment.xml file, the installer runs in a no-op (no-operation) mode, which explains why no files were installed the first time I tried this.

I was able to locate the deployment.xml file that the GUI installer creates and uses, via fs_usage. For those interested, it is created at /private/tmp/<adobeCode>/deploy.xml.

The adobeCode can also be found in the Setup.xml file inside the payloads folder of the NeoSpeech Voices folder, as shown here. It is inside the mediaSignature key.

Screen Shot 2015-08-27 at 11.52.30 am

Here is what the deployment XML should look like.
Note: I have redacted the AdobeCode and also changed the installSourcePath
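The original file isn't reproduced here, so below is a sketch of its shape reconstructed from the description above. Element names in your GUI-generated deploy.xml may differ, so trust the file you captured with fs_usage over this sketch; the adobeCode is redacted and the installSourcePath changed, as noted:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Deployment>
    <Properties>
        <Property name="installLanguage">en_US</Property>
        <Property name="installLocation">/Applications</Property>
        <Property name="installSourcePath">/path/to/NeoSpeech Voices</Property>
    </Properties>
    <Payloads>
        <Payload adobeCode="{REDACTED-ADOBE-CODE}">
            <Action>install</Action>
        </Payload>
    </Payloads>
</Deployment>
```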

Now that we know what the deploy.xml file should look like, we can create these at will without too much difficulty.

Now if we place a deploy.xml file inside the Install.app/Contents/MacOS folder we can call the installer from the command line with:

sudo ./Install --mode=silent --deploymentFile=deploy.xml

The installation completes successfully:

Starting Installer... In Silent mode.
Begin Adobe Setup
UI mode: Silent
100.00%
End Adobe Setup.
Installation successful.
Exiting Installer with Code: 0

Putting it all together

So now we know how to silently install the extra packages, we can create a single package that installs the Adobe Captivate package, then installs the eLearning Assets and Voices packages.

First we need to make sure we have created our two deploy.xml files: one for the NeoSpeech Voices installer and one for the eLearning Assets installer.

We will need to know the installSourcePath for our deploy.xml files to work, but we can be a little bit clever with a postinstall script that gets it for us and does a find-and-replace on the deploy.xml file before running the install.

So create the following deploy.xml files with the installSourcePath set to REPLACEME. If you're following along, they should look like this:
Note: Don’t forget to change the AdobeCode to the number from your Setup.xml file in the MediaSignature key

NeoSpeech Voices

eLearning Assets
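Both files share the same shape and differ only in their adobeCode (and, once the postinstall script has run, their installSourcePath). A hedged template, with element names reconstructed from the description above rather than copied from a real GUI-generated file:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Deployment>
    <Properties>
        <Property name="installLanguage">en_US</Property>
        <Property name="installLocation">/Applications</Property>
        <Property name="installSourcePath">REPLACEME</Property>
    </Properties>
    <Payloads>
        <Payload adobeCode="{YOUR-MEDIA-SIGNATURE-HERE}">
            <Action>install</Action>
        </Payload>
    </Payloads>
</Deployment>
```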

Now, I like to use Packages, so create a new project.

Screen Shot 2015-08-27 at 12.33.45 pm

Give it a name and save it somewhere.

Screen Shot 2015-08-27 at 1.13.04 pm

Scripts

We can pretty much leave all the other tabs alone, you can set your own package identifier and versioning to your own needs.

Here we want to add our Packages and deploy.xml files as an Additional Resource, so drag the installer packages into the Additional Resources window (Reference Style: Relative to Project)

I chose to put the deploy.xml files inside the folders containing the installers. If you change this path, you'll need to remember it when you create your postinstall script later on.

Screen Shot 2015-08-27 at 4.59.27 pm

Now we just need to create a post-installation script to tie everything together and get things installed!

Postinstall script

Our postinstall script should first install our Adobe Captivate 9 package, then it should install our NeoSpeech Voices and eLearning Assets packages.

We are going to be a little bit clever with our postinstall script as we need to dynamically update our deploy.xml files to contain the path to the install source directories for NeoSpeech and eLearning Assets.

We achieve this by using sed to search the deploy.xml file for REPLACEME and replace it with the working directory path to the folder. (This is a random path generated each time the installer package is opened by the Installer app, but we can get it by using the command `dirname "$0"`.)

The postinstall script should look something like this:
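Mine isn't reproduced here, but a sketch of the approach looks like the following. The package and folder names are assumptions based on the layout above, and patch_deploy_xml is a hypothetical helper; the real install steps are commented out so the helper can be exercised on its own:

```shell
#!/bin/sh
# Hypothetical postinstall sketch: install Captivate, then patch each
# deploy.xml (REPLACEME -> real source path) and run the silent installers.

WORKDIR="$(cd "$(dirname "$0")" && pwd)"

patch_deploy_xml() {
    # $1 = path to a deploy.xml, $2 = installSourcePath to substitute
    sed -i.bak "s|REPLACEME|$2|g" "$1" && rm -f "$1.bak"
}

# Real steps, shown for context:
# installer -pkg "$WORKDIR/Adobe Captivate 9.pkg" -target /
# for product in "NeoSpeech Voices" "eLearning Assets"; do
#     patch_deploy_xml "$WORKDIR/$product/deploy.xml" "$WORKDIR/$product"
#     "$WORKDIR/$product/Install.app/Contents/MacOS/Install" \
#         --mode=silent --deploymentFile="$WORKDIR/$product/deploy.xml"
# done
```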

Once you’ve created the postinstall script, just drag it onto the postinstall button in Packages and you are ready to save and build your package.

Remember to reach out and tell your Adobe rep how much of a pain in the ass it is that you have to do all this extra work.