NLVMUG Conference 2014 – March 6 – Den Bosch

After some smaller VMware Usergroup meetings in 2013, it’s now time for some serious business: on March 6, the NLVMUG Conference 2014 will take place at Congrescentrum 1931 in Den Bosch, The Netherlands.

The NLVMUG Conference 2014 is an annual full-day conference organized by the Dutch VMUG customer council and supported by the international vmug.com organization. The NLVMUG Conference offers a diverse program:

  • 3 parallel tracks focusing on End User Computing, Software Defined Datacenter and Cloud Operations & Management;
  • Meet the experts;
  • The one and only Genius Bar;
  • Social media room;

…and of course an event full of VMUG members!

VMware’s CMO Brian Gammage is responsible for the keynote; other presenters include Duco Jaspers, Joep Piscaer, Jan Willem Lammers, Gabrie van Zanten, Eric Sloof, Arnim van Lieshout and yours truly.

Read on to learn about the session Arnim and I will run at the NLVMUG conference!

vCAC 6.0 UnLeashed – Arnim & Viktor

Arnim van Lieshout and Viktor van den Berg invite you to join their session about vCloud Automation Center 6.0 at the NLVMUG Conference 2014. Arnim (senior consultant at VMware) and Viktor (senior consultant at PQR) will teach you everything you always wanted to know about vCAC 6.0 but didn’t dare ask.

In our session you will learn about the available vCAC features, including vCO integration, the Advanced Service Designer and IaaS-based provisioning. Based on real-life requirements, we will show you how to configure a multi-tenant self-service portal based on vCAC. You will learn how to use and configure multi-machine blueprints, and how to customize the vCAC workflows using vCenter Orchestrator. We will also demo the Advanced Service Designer.

We hope to see you on March 6 in Den Bosch.

More information & registration

The full agenda for the NLVMUG Conference 2014 is provided on www.nlvmug.com. Registration is available here.

vCloud Automation Center – Creating State Change Workflows

One of the cool features of vCloud Automation Center (vCAC) is the ability to extend functionality by creating new workflows or customizing existing ones. VMware provides 6 state change workflows by default that you can customize using the vCloud Automation Center Designer. This tool includes a library of activities that serve as building blocks for your custom workflows. The most powerful activities are the ones that invoke external PowerShell or SSH scripts, or call vCenter Orchestrator (vCO) workflows.

The default state change workflows available to you are:

  • BuildingMachine
  • MachineProvisioned
  • MachineRegistered
  • UnprovisionMachine
  • MachineDisposing
  • MachineExpired

To create new workflows, you need a vCAC Development Kit license. The workflow generator plugin guides you in creating different types of workflows and helps you create the supporting configuration files for each type of workflow. Using the workflow generator wizard, you can create new workflows in Visual Studio for the above states plus the following states:

  • On
  • Off

For more information about vCAC workflows see also chapter 2 of the vCAC Extensibility Guide. If you’re interested in an example of creating a new state change workflow using the workflow generator in Visual Studio, my colleague Omer Kushmaro over at ElasticSkies.com has written an excellent article on how to create a new workflow that is triggered in the “On” state.

The Challenge

My challenge was a bit different, as I wanted to run a vCO workflow right after a user has requested a new machine, but before the machine is queued for approval. Using the extensibility guide I had already identified “Requested” as the state I wanted to run my workflow in, but I quickly found that this state is not available in the Visual Studio workflow generator. So I needed a smart workaround. The following procedure allows you to create a new state change workflow that can run in any available state, without using Visual Studio.

Wait a minute, did you just say without the hassle of installing Visual Studio?

Yes, that’s correct. No Visual Studio required using this method!

NOTE: Before you continue, be aware that this method still requires a valid vCAC Development Kit license. Without this license you will be unable to install the workflow in the Model Manager.

The Solution

  1. Install the vCAC Designer if you haven’t done so already
  2. Start the vCAC Designer and load an existing workflow from the Model Manager using the Load button. In this example I will use the “WFStubMachineProvisioned” workflow.
  3. Press the Save button to export the workflow and save it to a file. I saved it as “WFStubMachineRequested.xaml”
  4. Open the saved file in the text editor of your choice and replace ALL occurrences of “MachineProvisioned” with “MachineRequested”
  5. Save the file again
  6. Use the CloudUtil.exe utility located in the C:\Program Files (x86)\VMware\vCAC\Design Center folder to install the workflow in the Model Manager. Use the following command syntax, where -f specifies the filename and -n specifies the name of the workflow (steps 4 through 6 are also shown in a scripted sketch after this list):
     CloudUtil.exe Workflow-Install -f WFStubMachineRequested.xaml -n WFStubMachineRequested
  7. The next step is to create a “WFStubMachineRequested.xml” configuration file. You can use the “ExternalWFStubs.xml” file, which contains the configuration for the 6 standard workflows, as an example, or copy the one I’ve created at the end of this article
  8. Specify the state that you want to run your workflow in by changing the value of the <MasterWFStateCriteria> tag. In my case I’ve changed this to “Requested”
  9. Specify the Custom Property that you want to use to enable the workflow in the <Property> tag. I’ve specified this as “ExternalWFStubs.MachineRequested”
  10. Specify the name of the workflow that you want to run using the “WorkflowName” argument in the <WorkflowArguments> tag. This name needs to be exactly the same as the name you used to import the workflow into the Model Manager using the CloudUtil.Exe tool earlier. I’ve used “WFStubMachineRequested”
  11. Specify the failure state that the machine will go into when something goes wrong in your workflow. Because nothing has been configured yet when the “Requested” state is reached, I’ve selected “Disposing” as the failure state to dispose of the machine
  12. Save the file. I’ve saved the file as “WFStubMachineRequested.xml”
  13. Copy the file to C:\Program Files (x86)\VMware\vCAC\Server\ExternalWorkflows\xmldb folder on the vCAC server
  14. Restart the VMware vCloud Automation Center Service
  15. After the service is restarted, click the Load button in the vCAC Designer and verify that you can now load the new workflow
  16. Use the vCAC Designer to add custom code to the new workflow. I’ve added the “InvokeVcoWorkflow” activity to call a vCO workflow named “WFStubMachineRequested”
  17. Save the workflow in the Model Manager by pressing the Send button
  18. The only thing left to do now is to enable the workflow by adding the custom property to a blueprint
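
For reference, steps 4 through 6, 13 and 14 can also be scripted. A minimal PowerShell sketch, assuming it runs on the vCAC server from the folder containing the exported XAML and XML files:

# Steps 4-5: clone the exported workflow stub under the new state name.
$src = 'WFStubMachineProvisioned.xaml'
$dst = 'WFStubMachineRequested.xaml'
(Get-Content $src) -replace 'MachineProvisioned', 'MachineRequested' | Set-Content $dst

# Step 6: install the renamed workflow into the Model Manager.
& 'C:\Program Files (x86)\VMware\vCAC\Design Center\CloudUtil.exe' Workflow-Install -f $dst -n WFStubMachineRequested

# Steps 13-14: copy the XML configuration file and restart the vCAC service.
Copy-Item 'WFStubMachineRequested.xml' 'C:\Program Files (x86)\VMware\vCAC\Server\ExternalWorkflows\xmldb\'
Restart-Service -DisplayName 'VMware vCloud Automation Center Service'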

You’re not limited to a single state change workflow per state; you can add additional workflows for the same state if required. Those workflows would typically be triggered simultaneously. If, however, there is a dependency between workflows that are triggered in the same state, you can leverage the priority attribute. Workflows run in order of priority, meaning that the highest-priority workflows run before lower-priority ones. You can find this attribute in the XML configuration file.

Example XML configuration file:

<?xml version="1.0" encoding="utf-8"?>
<plugins xmlns="http://dynamicops.com/schemas/externalwf">
  <plugin fullName="DynamicOps.External.RepositoryWorkflows.InvokeRepositoryWorkflow" priority="1">
    <MasterWFStateCriteria>Requested</MasterWFStateCriteria>
    <MasterWFTypeFullNameCriteria>*</MasterWFTypeFullNameCriteria>
    <ExecuteWhen>PreActivityExecution</ExecuteWhen>
    <AssemblyPath>[ExternalWorkflowsDirectory]\DynamicOps.External.RepositoryWorkflows.dll</AssemblyPath>
    <AllPropertiesExist>
      <Property>ExternalWFStubs.MachineRequested</Property>
    </AllPropertiesExist>
    <WorkflowArguments>
      <NameValue name="WorkflowName">WFStubMachineRequested</NameValue>
      <NameValue name="WorkflowTimeout">00:30:00</NameValue>
      <NameValue name="FailureState">Disposing</NameValue>
    </WorkflowArguments>
  </plugin>
</plugins>

vCloud Automation Center Part 2 – Preparing the Installation

Installing vCloud Automation Center (vCAC) requires some preparation. Although you can potentially install all components, including the database, on one single server, this approach isn’t suitable for production use. I would recommend installing all components on one single server for quick product evaluation purposes only. If you are planning to test-drive the product for a production implementation, then I would recommend installing the product in a more production-like fashion, separating duties across multiple boxes.

My recommendation for production-like testing includes at least the following three boxes:

  • Database server
    In a typical production scenario you’ll probably have dedicated database servers. Therefore I always recommend a dedicated database server for testing purposes as well. This way, any problems with using a remote database surface early during testing rather than during the production installation.
  • Web server
    For scalability reasons you might want to test web server scalability using external load balancers. Installing the web components on a separate server allows you to do so during the testing phase.
  • vCAC server
    For scalability reasons you might want to test installing a second vCAC Manager Service and external load balancers. This server will also contain the Distributed Execution Manager (DEM) Orchestrator, a DEM Worker and the required Agents. If you’re looking into scaling out a bit more, you might want to install the DEM Worker and Agents on a separate box as well.

Preparing the Database Server

According to the VMware Product Interoperability Matrix, only certain versions of Microsoft SQL Server 2008 and 2012 are supported. Make sure that you use a supported version of Microsoft SQL Server for your installation.

The database server has the following requirements:

  • TCP/IP protocol enabled for MSSQL Database Instance
    In order to connect to the database remotely, TCP/IP needs to be enabled as a protocol for the SQL Server database instance hosting the vCAC database:
    • Open SQL Server Configuration Manager under Microsoft SQL Server –> Configuration Tools
    • In the tree pane, click SQL Server Network Configuration –> Protocols for MyInstanceName
    • In the results pane, verify that, under the Status column, Enabled appears next to the name of the TCP/IP protocol
    • In the tree pane, click SQL Native Client Configuration –> Client Protocols
    • In the results pane, verify that, under the Status column, Enabled appears next to the name of the TCP/IP protocol
    • In the tree pane, click SQL Server Services
    • In the results pane, right-click SQL Server (MyInstanceName), and then click Restart
  • Microsoft Distributed Transaction Coordinator Service (MS DTC) enabled
    This service is responsible for coordinating transactions that span multiple systems. To enable this service use the following procedure:
    • Open Component Services from Administrative Tools
    • In the tree pane, click Component Services –> Computers –> My Computer –> Distributed Transaction Coordinator
    • In the results pane, right-click Local DTC and select Properties
    • Select the Security tab
    • Select Network DTC Access, Allow Remote Clients, Allow Remote Administration, Allow InBound, and Allow OutBound (Leave everything else as is)
    • Select OK
  • No firewalls between Database Server and the Web server or vCAC Server, or ports opened as described in Firewall Configuration
    Both the Web Server and the vCAC Server need to communicate with the database. Besides opening the firewall for SQL Server traffic (by default port 1433), you must also enable Microsoft Distributed Transaction Coordinator Service (MS DTC) communication between all servers in the deployment. More detailed instructions for enabling DTC through a firewall can be found in KB 250367
    Apart from 3rd party firewalls, don’t forget the Windows Firewall on the server. You need to disable or configure that as well ;-)
  • If you’re using SQL Server Express, the SQL Server Browser service must be running
    Make sure that you set the startup type of the SQL Server Browser service to Automatic and start the service (see the scripted sketch at the end of this section)

    If you want to install the SQL Server Management Studio, you’ll also need to add the .Net Framework 3.5.1 feature using Server Manager.
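
    If you’d rather script these database server preparations, the following PowerShell sketch shows one way to do it. It is a sketch under assumptions: a default ‘MSSQLSERVER’ instance, and the SQL Server SMO assemblies that ship with the product; adapt the names to your environment. The MS DTC registry values mirror the checkboxes from the Component Services procedure above.

    # Enable the TCP/IP protocol for the SQL Server instance via SMO WMI.
    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SqlWmiManagement')
    $wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer
    $tcp = $wmi.ServerInstances['MSSQLSERVER'].ServerProtocols['Tcp']
    $tcp.IsEnabled = $true
    $tcp.Alter()
    Restart-Service -Name MSSQLSERVER -Force

    # Enable network access for MS DTC (mirrors the Component Services checkboxes).
    $dtcKey = 'HKLM:\SOFTWARE\Microsoft\MSDTC\Security'
    foreach ($name in 'NetworkDtcAccess', 'NetworkDtcAccessClients', 'NetworkDtcAccessAdmin',
                      'NetworkDtcAccessInbound', 'NetworkDtcAccessOutbound') {
        Set-ItemProperty -Path $dtcKey -Name $name -Value 1
    }
    Restart-Service -Name MSDTC

    # Open the default SQL Server port in the Windows Firewall.
    netsh advfirewall firewall add rule name="SQL Server 1433" dir=in action=allow protocol=TCP localport=1433

    # For SQL Server Express only: make sure the SQL Server Browser service runs.
    Set-Service -Name SQLBrowser -StartupType Automatic
    Start-Service -Name SQLBrowser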

    Preparing the Web Server

    The web server has the following requirements:

    • Microsoft .NET Framework 4.5 needs to be installed
      .Net Framework is available at http://msdn.microsoft.com/en-us/vstudio/aa496123
      Make sure that you install the .Net Framework before installing IIS. If you fail to do so, .Net is not registered properly with IIS. To fix that, use the following procedure:
      • Open a command line on the server as administrator
      • Change directories into your .Net 4.5 directory (most likely C:\Windows\Microsoft.NET\Framework\v4.0.30319)
      • Type aspnet_regiis.exe -i and press Enter
      • Type iisreset and press Enter
    • IIS Server Role installed
      Currently only Microsoft Internet Information Services (IIS) 7.5 is supported. The IIS Server role must be installed with the following Role Services using Server Manager (more information on installing IIS can be found here; a scripted sketch follows this requirements list):
      • Static Content
      • Default Document
      • HTTP Redirection (required for vCAC Self-Service Portal)
      • ASP.NET
      • ISAPI Extensions
      • ISAPI Filters
      • Windows Authentication
    • IIS Authentication configuration
      After installing IIS, you’ll need to do some configuration within IIS:
      • Open Internet Information Services (IIS) Manager
      • In the tree pane, expand the <machine name>, Sites, to reach the Default Web Site
      • In the results pane, double click on Authentication
      • Disable Anonymous Authentication
      • Enable Windows Authentication
      • Highlight Windows Authentication and click on Providers under Actions on the right hand side
        • Remove Negotiate from the Enabled Providers list
        • Add Negotiate back into the list, making sure it is the first provider in the list. (This is necessary due to a bug in IIS)
        • Both Negotiate and NTLM providers should be enabled
      • Open Advanced Settings (above Providers)
        • In the Extended Protection drop-down box, change the value to Accept and then change it back to Off again
        • Kernel-Mode Authentication should be enabled
        • Click OK. (This is necessary due to a bug in IIS)
    • Windows Process Activation Service installed
      Use the following procedure to add the Windows Process Activation Service feature:
      • Open Server Manager
      • Expand the Windows Process Activation Service feature
        • Select Process Model, .Net Environment, Configuration APIs
      • Expand the .Net Framework 3.5.1 Features
        • Select both .Net Framework 3.5.1 and WCF Activation
        • Make sure that both HTTP Activation and Non-HTTP Activation are selected
      • Complete the installation of the Windows Features
    • Microsoft Distributed Transaction Coordinator Service (MS DTC) enabled
      This service is responsible for coordinating transactions that span multiple systems. For detailed instructions on enabling MS DTC see the Database Server section previously
    • No firewalls between Database Server and the Web server or vCAC Server, or ports opened as described in Firewall Configuration
      Besides opening the firewall for SQL server traffic (by default port 1433), you must also enable Microsoft Distributed Transaction Coordinator Service (MS DTC) communication. For more details see the Database Server section previously
    • Log on as a batch job right
      This right is required for the domain user that you are planning to use as the IIS application pool identity for the Model Manager Web Service. I would recommend using a separate service account. To add the Log on as a batch job right:
      • Open Local Security Policy from Administrative Tools
      • In the tree pane, expand Local Policies, then select User Rights Assignment
      • Double-click Log on as a batch job
      • Click Add User or Group
      • Add the user that will be used to run the IIS Application pool identity for the Model Manager Web Service
      • Click OK
    • Log on as a service right
      The domain user that you are planning to use as the IIS application pool identity for the Model Manager Web Service requires the Log on as a service right
      • Open Local Security Policy from Administrative Tools
      • In the tree pane, expand Local Policies, then select User Rights Assignment
      • Double-click Log on as a service
      • Click Add User or Group
      • Add the user that will be used to run the IIS Application pool identity for the Model Manager Web Service
      • Click OK
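
    If you prefer scripting over clicking, most of this list can be automated with PowerShell on Windows Server 2008 R2. The sketch below is a starting point, not a definitive implementation: the role service and feature names correspond to the requirements above, the appcmd calls apply the authentication changes to the Default Web Site, and DOMAIN\svc_vcac is a hypothetical placeholder for your Model Manager service account. Reordering the providers and toggling Extended Protection are easier done in IIS Manager as described above; ntrights.exe comes from the Windows Server Resource Kit and is not installed by default.

    Import-Module ServerManager

    # Install the required IIS role services plus the Windows Process
    # Activation Service and .Net Framework 3.5.1 features.
    Add-WindowsFeature Web-Server, Web-Static-Content, Web-Default-Doc, `
        Web-Http-Redirect, Web-Asp-Net, Web-ISAPI-Ext, Web-ISAPI-Filter, `
        Web-Windows-Auth, WAS-Process-Model, WAS-NET-Environment, `
        WAS-Config-APIs, NET-Framework-Core, NET-HTTP-Activation, `
        NET-Non-HTTP-Activ

    # Disable Anonymous and enable Windows Authentication on the Default Web Site.
    $appcmd = "$env:windir\System32\inetsrv\appcmd.exe"
    & $appcmd set config 'Default Web Site' -section:system.webServer/security/authentication/anonymousAuthentication /enabled:false /commit:apphost
    & $appcmd set config 'Default Web Site' -section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost

    # Grant the Log on as a batch job and Log on as a service rights to the
    # application pool account (adjust DOMAIN\svc_vcac to your environment).
    ntrights -u DOMAIN\svc_vcac +r SeBatchLogonRight
    ntrights -u DOMAIN\svc_vcac +r SeServiceLogonRight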

    Preparing the vCAC Server

    In my installation setup, the vCAC server will host both the vCAC Manager Service and the DEM Orchestrator service. See the vCAC installation guide for specific server requirements if you want to separate those services onto different boxes. The vCAC server has the following requirements:

    • Must be installed on Windows Server 2008 R2 SP1
      Currently only Windows Server 2008 R2 SP1 is supported for vCAC installations
    • Windows PowerShell Version 2.0
      PowerShell 2.0 is installed automatically with Windows Server 2008 R2
    • Server should be joined to a domain to allow the use of Active Directory users
    • Microsoft .NET Framework 4.5 needs to be installed
      .Net Framework is available at http://msdn.microsoft.com/en-us/vstudio/aa496123
      If you’re installing all components on a single box, make sure that you install .Net framework before installing IIS. For more information see the Web Server section previously
    • IIS Server Role installed
      The IIS Server role must be installed prior to installing the Manager Service, because the service presents itself through IIS. IIS can be installed with the default options
    • Secondary Logon service needs to be running
      Open Services.msc and start the Secondary Logon service. It only needs to be running during the installation process, but make sure that you also set the startup type of the Secondary Logon service to Automatic to keep it running persistently across reboots (see the sketch after this list)
    • Microsoft Distributed Transaction Coordinator Service (MS DTC) enabled
      This service is responsible for coordinating transactions that span multiple systems. For detailed instructions on enabling MS DTC see the Database Server section previously
    • No firewalls between Database Server and the Web server or vCAC Server, or ports opened as described in Firewall Configuration
      Besides opening the firewall for SQL server traffic (by default port 1433), you must also enable Microsoft Distributed Transaction Coordinator Service (MS DTC) communication. For more details see the Database Server section previously
    • Manager Service’s time should match the database server’s time
      As with many other VMware products, time synchronization is crucial. Therefore make sure that you configure all servers to use the same single time source
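
    The two service-related items above can also be scripted. A minimal sketch, where ntp.example.com is a placeholder for your real time source:

    # Keep the Secondary Logon service running, also across reboots.
    Set-Service -Name seclogon -StartupType Automatic
    Start-Service -Name seclogon

    # Point the server at a single time source so the Manager Service's
    # clock matches the database server's clock.
    w32tm /config /manualpeerlist:ntp.example.com /syncfromflags:manual /update
    w32tm /resync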

    To ensure that you have satisfied all prerequisites, run the vCAC Prerequisite Checker tool before installing any of the vCAC components. Installing the vCAC components will be discussed in the next Part.

    vCloud Automation Center Part 1 – Components Overview

    Last year VMware acquired DynamicOps and their product called DynamicOps Cloud Automation Center (DCAC). DynamicOps originally started in 2005 as part of Credit Suisse’s Global Research and Development Group, to help the company address the operational and governance challenges of rolling out virtualization technology. After VMware acquired DynamicOps, the product was rebranded to vCloud Automation Center (vCAC) and has recently been updated with the release of version 5.2.

    Since the release of vCAC I’ve always wanted to find time to look at this new product in our portfolio, but I haven’t managed to do so until recently. vCAC allows internal IT departments to create provisioning blueprints and provision these blueprints to VMware vSphere, Microsoft Hyper-V, external cloud providers like Amazon EC2, and even physical machines, all using one single provisioning process. Apart from the out-of-the-box functionality, vCAC offers great extensibility to support end-to-end provisioning. So-called workflow stubs allow you to hook into the provisioning process at different stages to insert your customizations. Extensibility options include writing your own workflows in vCAC Designer and/or calling out to external systems like vCO or running PowerShell scripts.

    Enough said, first things first. Let’s have a look at the architecture. The vCAC environment can be divided into 3 main parts:

    • The vCAC core components
    • The integration and distributed execution components
    • The provisioning infrastructure

    vCAC core

    The vCAC core contains the following sub-components:

    Web Server

    There are three web services that can either be installed together on the same web server or distributed across multiple web servers. The vCAC web services are designed for Microsoft IIS and therefore need to be installed on a Microsoft IIS web server.

    • Administration Portal Web Site
      The Administration portal provides the administration user interface to vCAC and communicates directly with the Model Manager. The portal can be reached via https://<webserver_name>/vCAC.
    • Reports Web Site
      The reports web site provides access to vCAC reports, available through a link in the vCAC administration console or via https://<webserver_name>/vCACReports.
    • Model Manager Web Services
      The Model Manager manages core vCAC and custom models. The Model Manager provides services and utilities for persisting, versioning, securing and distributing the different elements of the model and for communicating with the vCAC portal website and Distributed Execution Managers (DEMs).

    vCAC server

    The vCloud Automation Center service (commonly called the Manager Service) coordinates communication between vCAC agents, the vCAC database, Active Directory and SMTP. The Manager Service communicates with the portal website through the Model Manager. The system hosting the Manager Service is typically called the vCAC Server.

    SQL server

    vCAC requires a Microsoft SQL Server database to maintain information about the machines it manages and its own elements and policies. This database is typically created during vCAC installation, but can also be created manually before the vCAC installation.

    Integration and distributed execution

    The components in this part are the interface between the vCAC core components and the provisioning infrastructure.

    Distributed Execution Manager

    The Distributed Execution Manager (DEM) comes in two flavors. Each DEM instance can perform either as an Orchestrator or as a Worker.

    • DEM Orchestrator
      The DEM Orchestrator acts as a manager, responsible for monitoring the DEM Workers and for scheduling workflows on them. By preprocessing the workflows, it decides which worker needs to pick up a certain workflow, as different workers can have different functionalities, or “skills” as they’re called inside vCAC. If a worker loses its connection, the DEM Orchestrator puts its workflows back in the queue for another DEM Worker to pick up.
    • DEM Worker
      The DEM Worker is responsible for executing workflows.

    vCAC Agent

    vCAC agents are used to interact with external systems. There are different types of agents, each having specific functions: agents that interact with hypervisors, agents that allow vCAC to run scripts in guests as part of the provisioning process, agents that interact with virtual desktop solutions, and WMI agents that enable vCAC to collect data from Windows machines.

    Provisioning Infrastructure

    The provisioning Infrastructure is the environment that you want to provision blueprints to. This can be either one or a combination of the following:

    • Hypervisors:
      • VMware ESX(i)
      • Windows Hyper-V
      • RedHat KVM
      • Citrix XenServer
    • Hypervisor Management
      • VMware vCenter Server
      • Microsoft SCVMM
    • Hardware Management:
      • Dell iDRAC
      • HP iLO
      • Cisco UCS Manager
    • Cloud:
      • VMware vCloud Director
      • Amazon Web Services Elastic Compute Cloud (EC2)

    It might sound massively complex when you’re looking into vCAC for the first time, but as soon as you understand the different components and where they fit, it gets a lot simpler. I’ve created a small conceptual diagram showing the above-mentioned components.

    vCenter Single Sign On (SSO) is an authentication proxy

    I’ve been in numerous discussions regarding vCenter Single Sign On (SSO) where people often didn’t fully understand the functionality of SSO.

    Let me make this statement:
    SSO is an authentication proxy.

    The function of SSO is to act as a single proxy in your VMware environment that verifies user credentials against security providers. When configuring SSO, you configure it with every security provider that needs to authenticate users. Besides the proxy function you can also define internal SSO users. These users are located inside the SSO database. Whenever you are successfully authenticated by SSO, you receive a security token that lets you connect to other vSphere components without providing your password again, hence Single Sign On.

    There are three kinds of users that can be used:

    1. Internal system users, referenced with the @system-domain domain
    2. Local users. These are local Windows users defined on the SSO server
    3. LDAP users. These users are located in external LDAP databases, like OpenLDAP or Active Directory.

    Let me make another statement:
    You do not set permissions for resources in SSO

    Permissions are still set on the resources (for instance vCenter Server); nothing has changed there. So if a user needs to be granted permission to vCenter, you add the user on the vCenter server and assign permissions as you did before. Additionally, you are now able to assign permissions to @system-domain users.

    Let me finally make this last statement:
    You can have multiple SSO installations in your environment

    The fact that it’s called Single Sign On does not necessarily mean that you can only deploy one instance of SSO in your complete environment. Every vCenter installation requires an SSO instance, but there’s nothing wrong with installing a separate SSO instance for every vCenter Server in your environment. As a matter of fact, installing multiple SSO instances makes the environment less complex, does not create dependencies between vCenter Server environments and therefore simplifies future upgrades.

    Move/Replace vCloud Director NFS Transfer Server Storage

    In a multi-cell vCloud Director installation, all cells need access to a shared spooling area, also known as NFS transfer server storage. When you need to move or replace the NFS transfer server storage, because the currently presented NFS share is too small or because it was lost in a crash, you can simply present a new share to the vCloud Director cells.

    The following procedure shows you how to replace the NFS transfer server storage:

    Create NFS share

    First you need to export a share on the new NFS server. The procedure might differ depending on the type of NFS server you’re using. Red Hat Enterprise Linux 5.7 was used in the following procedure.

    1. Create a directory to export:

    mkdir /nfs/vCD-Transfer

    2. Export the NFS directory. Add the following line to /etc/exports:

    /nfs/vCD-Transfer <accesslist>(rw,no_root_squash)

    Note: Replace <accesslist> with the IP addresses of your VCD cells, or allow a specific network like 10.1.1.0/24. Make sure there is no space between the access list and the options.

    3. Restart NFS service:

    service nfs restart

    Mount NFS share on VCD cells

    When the new NFS share is ready, perform the following steps on each VCD cell:

    1. Unmount the current NFS share:

    umount /opt/vmware/vcloud-director/data/transfer

    2. Modify /etc/fstab and make sure the following line is present and matches your NFS server IP and directory:

    <nfsip>:/nfs/vCD-Transfer /opt/vmware/vcloud-director/data/transfer nfs rw,soft,_netdev 0 0

    Note: Replace <nfsip> with the IP address of your NFS server

    3. Mount the NFS share:

    mount -a

    4. Verify that the permissions on the transfer directory are set to 750 (drwxr-x---). If not, change them:

    chmod 750 /opt/vmware/vcloud-director/data/transfer

    5. Verify that both the user and group on the transfer folder are set to vcloud. If not, change them:

    chown -R vcloud:vcloud /opt/vmware/vcloud-director/data/transfer

    6. Restart VCD cell:

    service vmware-vcd restart

    Note 1: Be aware that (in a production environment) tasks could be active on the cell. Restarting the cell this way will break any running tasks. You might want to quiesce the cell first as described in http://kb.vmware.com/kb/2034994

    Note 2: You’ll notice a warning in the cell.log log file indicating that the cell is unable to verify that the other cells share the same spooling area. This is normal at this stage as the other cells haven’t been updated with the new NFS share yet. Just make sure that you update all cells. When a VCD cell starts it writes a marker file onto the spooling area to verify that the spooling area is writable and checks all other marker files to verify that it’s sharing the spooling area with all other cells.

    Replacing VMware vCenter 5.1 Certificates Made Easy

    The release of vCenter 5.1 added more components and therefore more certificates into the mix. Using CA-signed certificates increases security, but the process of updating these certificates is currently very tedious and error-prone.

    VMware announced the general availability of vCenter Certificate Automation Tool 1.0. This tool provides an automated mechanism to replace certificates in the following components of the vCenter management platform:

    1. vCenter Server
    2. vCenter Single Sign On
    3. vCenter Inventory Service
    4. vSphere Web Client
    5. vCenter Log Browser
    6. vCenter Orchestrator (VCO)
    7. vSphere Update Manager (VUM)

    The corresponding KB article can be found at: http://kb.vmware.com/kb/2041600

    This tool is fully supported by VMware as well.

    Versioning and Renaming Elements in vCenter Orchestrator

    During the development cycle of a workflow, your workflow can change drastically whenever you add new functionality. Therefore it’s good practice to save different versions of your workflow. One way to achieve this is by using the built-in versioning functionality. As a best practice, always increase the version number of your workflow when making changes, and don’t forget to write a decent comment describing the latest changes. Although this enables you to revert back to saved versions using the version history, it doesn’t allow you to have a peek at a version. Therefore I often find it more convenient to keep a copy of the workflow or action for reference and backup. This way you can easily open a saved version without impacting the current version and risking the loss of any changes.

    Note: Pressing the “Revert” button in the editor or the Version History Inspector does NOT prompt you, but instantly reverts to the last saved or the selected version.

    Workflow Elements

    Making a copy of a workflow or action for backup is easy: right-click the item and select “Duplicate …”. But be careful when duplicating workflows or actions; every element in vCenter Orchestrator is identified by an internal ID. This allows you to have multiple items with the same name, even in the same folder. Because of this ID, you can’t simply replace a workflow by putting a copy in the original folder using the same name.

    When a workflow is called from within another workflow, the linked workflow is referenced by its ID. Renaming the linked workflow doesn’t break the link in the parent workflow. When you want to replace the linked workflow with another copy or version, you have to change the linked workflow element in the parent workflow to point to the replacement. Suppose that you have a main workflow called ‘wfMain’ referencing a sub-workflow ‘wfSub’. Figure 1 shows a graphical representation of this workflow.

    Figure 1

    When you want to create a duplicate of ‘wfSub’ to develop new functionality without affecting users of the current workflow, you create a duplicate and call it ‘wfSub_v2’. After adding the new functionality you want your users to use this new version of the workflow. You rename ‘wfSub’ to ‘wfSub_v1’ and then rename ‘wfSub_v2’ to ‘wfSub’. When viewing the main workflow, you’ll see that the name of the referenced sub-workflow has changed to ‘wfSub_v1’ following the rename operations. This is because the sub-workflow is linked by ID. What happened under the surface in vCO is that the new copy got a new internal ID and is not linked to the parent workflow ‘wfMain’. Figure 2 shows that the referenced workflow has changed. Note that the name of the workflow element didn’t change.

    Figure 2

    When you have multiple linked workflows, replacing workflows can be a tedious task, as there is no back-link from a workflow to any parent workflow that references it. The only option is using the search feature of the client to search for references, as shown in Figure 3. Therefore avoid having a production version and a ‘development’ version of the same workflow or action on one server.

    Figure 3

    A best practice is running a separate development instance of vCO. This way you can develop new functionality on the development server and, when finished, synchronize the new version to the production instance. When you synchronize content between vCO instances, the synchronized items on both the source and destination instance keep the same internal ID. This saves you from replacing and relinking workflows. But be careful to keep developing in the same ‘master’ workflow and only make duplicates for backup.

    Tip: Make a duplicate of your workflow or action after every significant change. I always append the version number to the element’s name for easy reference.

    Action Elements

    Now let’s do the same operation on an action element. Let’s add an action element to the ‘wfMain’ workflow called ‘getMyObject’. Figure 4 shows the new workflow.

    Figure 4

    Now when you rename the action called ‘getMyObject’ to ‘getMyObject_v2’, you’ll notice that the reference of the action element didn’t change. In fact, the workflow is unable to run because it can’t find the referenced element ‘getMyObject’. When running a workflow validation, you’ll receive an error stating that a referenced element could not be found, as shown in Figure 5.

    Figure 5

    This is because action elements are referenced by path (their location in the action library tree). You can verify this by opening the action element and viewing the ‘Scripting’ tab, as shown in Figure 6.

    Figure 6

    Now create a new copy of the action element and give it the original name (‘getMyObject’). When you run the validation process again, everything is fixed.

    Unlike workflow elements, there is no option to relink an action element to another action, so you’ll have to replace the action element with a new one. This also means that you have to bind all input and output parameters again, as well as set the exception binding, if any.

    Conclusion

    So before you start renaming elements, think twice. As a best practice, never rename elements. If you want to keep some versions of an element as a backup while working on a solution, remember to treat the copy as the backup and keep editing the original version, not the copy. Also make sure that you update the version number after every change, because this version number is the only parameter vCO evaluates when determining which elements need to be copied to the remote server during synchronization.

    vCenter Orchestrator Configuration Element Attribute Values Missing After Import

    When importing a package on a vCenter Orchestrator (vCO) server in my lab, I noticed that the values of the attributes inside the Configuration Elements (CEs) were missing. At first I thought it was because of the different vCO versions (I exported the package as a backup from a vCO 4.2.1 server and imported it into a vCO 5.1 server), but I witnessed the same behavior when importing the package into vCO 4.2.1.

    When searching the web for this phenomenon I couldn’t find any information describing this behavior, until I spotted a post on the VMware vCenter Orchestrator blog: http://blogs.vmware.com/orchestrator/2012/02/configuration-elements-revisited.html

    Somewhat hidden at the bottom of the article, the difference between exporting a single Configuration Element and exporting a Configuration Element as part of a package is explained:

    Nevertheless there is a small difference with exporting a single configuration element, the difference is that in that case the values of the attributes are not exported! In another words, if you import a package containing a configuration element into another vCO, the configuration element attribute values are not set.

    I was not aware of this, and it kind of makes sense if you use Configuration Elements solely to hold vCO-server-specific information. But in my case, Configuration Elements have been used to store global information about the environment.

    I’ve also been saving the exported package as an extra backup next to the database backup, but as I’ve now discovered, that backup is useless for the CEs.

    The Configuration Element Content

    Let’s have a closer look at the exported packages. First, have a look at the Configuration Element when exported as a single configuration item. The CE is exported as an XML file. From the XML you can see that the exported CE is called ‘CE1’ and has three attributes called ‘att0’, ‘att1’ and ‘att2’. The <value> tag contains the value of each attribute. For instance, attribute ‘att0’ is of type ‘string’ and has a value of “This is a Test”.

    <?xml version="1.0" encoding="ISO-8859-1" standalone="yes"?>
    <config-element id="828080808080808080808080808080808180808001359849118679aebf2a6a5a5"  version="0.0.0" >
    <display-name><![CDATA[CE1]]></display-name>
    <atts>
    <att name='att0' type='string' read-only='false' ><value encoded='n'><![CDATA[This is a Test]]></value>
    <description><![CDATA[att0_desc]]></description>
    </att>
    <att name='att1' type='boolean' read-only='false' ><description><![CDATA[att1_desc]]></description>
    </att>
    <att name='att2' type='number' read-only='false' ><value encoded='n'><![CDATA[123.0]]></value>
    <description><![CDATA[att2_desc]]></description>
    </att>
    </atts>
    </config-element>

    When you look at the same Configuration Element exported as part of an Orchestrator package, you can see that the <value> tags are omitted from the XML content.

    <?xml version="1.0" encoding="ISO-8859-1" standalone="yes"?>
    <config-element id="828080808080808080808080808080808180808001359849118679aebf2a6a5a5"  version="0.0.0" >
    <display-name><![CDATA[CE1]]></display-name>
    <atts>
    <att name='att0' type='string' read-only='false' ><description><![CDATA[att0_desc]]></description>
    </att>
    <att name='att1' type='boolean' read-only='false' ><description><![CDATA[att1_desc]]></description>
    </att>
    <att name='att2' type='number' read-only='false' ><description><![CDATA[att2_desc]]></description>
    </att>
    </atts>
    </config-element>

    Workarounds

    To work around this behavior of vCenter Orchestrator, I found two options:

    1. Export all Configuration Elements independently
    2. Synchronize the package to the server using the vCO synchronization option.

    Option 1: Export

    When exporting a single Configuration Element, the values are exported, as shown earlier in this post. Export each Configuration Element separately as a single configuration item. Use this option when you want to create a backup or when there’s no network connectivity to the destination vCO server.

    Option 2: Synchronize

    Use this option when you have network connectivity to your destination vCO server. This is the easiest and recommended option to copy content from one server to the other.

    PHD Virtual Backup 6.1

    PHD Virtual has released a new version of their award-winning data protection software: PHD Virtual Backup 6.1. This version adds some major new functionality and enhancements, including:

    • Instant Recovery for Full/Incremental Backup Mode
    • Rollback Recovery
    • Reporting Enhancements
    • Job Copy

    Instant Recovery for Full/Incremental Backup Mode

    With version 6.1, PHD Virtual brings Instant Recovery to Full/Incremental backup mode. This functionality was introduced in version 6.0, but was only available with their Virtual Full backup mode. In version 6.1, PHD Virtual extends the power of PHD Instant Recovery to include Full/Incremental backup mode as well. Instant Recovery allows you to start a VM from its backup location and drastically reduce RTO. After the VM is started, users can either leverage VMware Storage vMotion or use PHD Motion to move the VMs to production storage.

    Rollback Recovery

    I think Rollback Recovery is a major piece of new functionality. It allows you to restore a backup of a VM very quickly. Instead of doing a full restore, Rollback Recovery restores only the changed blocks (using VMware Changed Block Tracking (CBT) information) over the existing VM, effectively rolling back your VM to a previous point in time. This improves your RTO, because generally only about 1-5% of a VM’s blocks change each day.

    You might wonder why you would need Rollback Recovery when you have Instant Recovery available and can start a VM from the backup location straight away, without waiting for any restore of VMDK files at all. The major difference is that backup files are typically stored on cheaper, and hence slower-performing, storage. This kind of storage might not meet your application’s requirements when using Instant Recovery. Using Rollback Recovery you can restore the VM to the application’s production storage right away with only minimal downtime (depending on the number of changes to the VM).

    Reporting Enhancements

    Reporting enhancements in version 6.1 include:

    • Export Job History
      This allows users to export the job history table from the PHD Virtual Backup 6.1 UI to Excel in CSV format.
    • Data Protection Risk Management Report
      Gain visibility into your data protection integrity by viewing a list of all VMs in the environment, along with information on their sizes, virtual disks and the date of the last successful backup.

    Job Copy

    Spend less time creating and scheduling backup jobs with Job Copy. While it might seem only a small enhancement, the ability to copy a job will greatly accelerate the backup job creation process. There’s nothing more annoying than having to create a job from scratch every time.

    What’s New in Version 6.1 Video

    Watch this video to see a demo of the new functionality in PHD Virtual Backup 6.1.

    To find out more about PHD Virtual Backup go to http://phdvirtual.com