Backup Citrix VD templates with OVF Tool utility & PowerCLI

This script automates the process of backing up your Citrix VD templates. In fact, you can use it to back up any VM in your vCenter Server.

The script leverages VMware's OVF Tool utility, which should be installed on the server you're scheduling the script on.

The script will run the OVF Tool utility for each Template or VM defined in the CSV file. It will then zip the log files generated by the OVF Tool utility and email them to the email addresses defined in the variables section.

NOTE: Each time the script runs, the existing OVF data is overwritten. Export job logs from previous runs are archived in the Log-Archive folder. The script also creates a console log file in the root of the export folder.
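
The core loop can be sketched roughly as follows. The CSV column name, vCenter address, credentials and export paths below are placeholders, not the script's actual values:

```powershell
#Minimal sketch: export each template/VM listed in a CSV with OVF Tool.
#The "Name" column, vCenter path, credentials and folders are assumptions.
$ovftool = "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe"

foreach ($vm in (Import-Csv "C:\Scripts\vm-list.csv")) {
    #--overwrite replaces the previous export, matching the NOTE above
    & $ovftool --noSSLVerify --overwrite `
        "vi://user:pass@vcenter.local/Datacenter/vm/$($vm.Name)" `
        "D:\Exports\" > "D:\Exports\$($vm.Name)-export.log" 2>&1
}
```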

Continue reading

Installing CA signed certificates on Dell EMC Unity arrays

I couldn’t find a lot of documentation on installing CA signed certificates on Dell EMC Unity arrays, so here are the steps:

1. Download and install openssl on your desktop/laptop.

2. Using a text editor, create a cfg file and save it in the c:\temp\unity1 folder as unity1_cfg.txt.

Example cfg file (modify for your environment):

[ req ]
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req
[ v3_req ]
basicConstraints = CA:false
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = DNS:unity1.myitblog.local
[ req_distinguished_name ]
countryName = GB
stateOrProvinceName = State
localityName = London
0.organizationName =
organizationalUnitName = IT
commonName = unity1.myitblog.local
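
The next step is typically to generate a private key and CSR from this cfg file. A sketch, with the key and CSR output filenames assumed:

```shell
# Generate a 2048-bit key and a CSR using the cfg file from step 2
openssl req -new -nodes -newkey rsa:2048 \
  -keyout c:/temp/unity1/unity1.key \
  -out c:/temp/unity1/unity1.csr \
  -config c:/temp/unity1/unity1_cfg.txt
```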

Continue reading

Microsoft Failover Cluster node not sending out Gratuitous ARP requests after a failover

This was a particularly odd issue that I had never experienced before, so I thought it was worth blogging about.

During a normal MS Failover Cluster failover operation, the node claiming the cluster roles sends out a GARP (gratuitous ARP) request to notify the networking infrastructure of the MAC address change. The Layer 3 switch or router then updates the MAC address in its ARP table, and packets are routed to the node that claimed the cluster roles. Recently I found myself troubleshooting a MS Failover Cluster deployment which wasn't behaving quite in this manner.

Some background info:

  • For the sake of this blog post, let's call the two nodes A and B.
  • The nodes are running Server 2016, SQL 2012 and Microsoft Failover Cluster services.
  • Each node has two NICs: one for the client and management network, and one for the heartbeat network.
  • The cluster consists of three network resources: a cluster IP address and two SQL instance addresses, which float between the two nodes depending on which one is active.
  • All three IP addresses are in the same VLAN.
  • I ran continuous pings to all three IP addresses during the failover tests.
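
One quick way to confirm whether a node actually emits a GARP during failover is to span the cluster VLAN, capture the failover, and filter the trace for gratuitous ARPs. In Wireshark the display filter is:

arp.isgratuitous == 1

If nothing matches at the moment the roles move, the upstream ARP tables will keep pointing at the old node until their entries age out.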

Continue reading

Using the pktcap-uw tool to capture VM traffic

SSH on to the ESXi host the VM resides on and run the command below, replacing VM-NAME with your VM's name.

esxcli network vm list | grep -i VM-NAME


1122341 VM-NAME dvportgroup-202593

Copy the digits at the beginning of the output (1122341) into the command below and run it.

esxcli network vm port list -w 1122341


Port ID: 33554507
vSwitch: VDS or VSS Name
Portgroup: dvportgroup-202593
DVPort ID: 323
MAC Address: 00:50:56:91:72:03
IP Address:
Team Uplink: vmnic1
Uplink Port ID: 33554434
Active Filters:

Copy the Port ID at the beginning of the output (33554507) into the capture commands below and run the captures.


pktcap-uw --switchport 33554507 -o /tmp/VM-NAME-outbound.pcap


pktcap-uw --switchport 33554507 --dir 1 -o /tmp/VM-NAME-inbound.pcap

Install WinSCP, connect to the host over SCP, download the pcap files from the /tmp folder, and analyze them in Wireshark.
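
If a capture session is left running (for example because the SSH session dropped), the commonly used one-liner to stop all pktcap-uw sessions on the host is:

```shell
# Stop all running pktcap-uw sessions on the ESXi host
kill $(lsof | grep pktcap-uw | awk '{print $1}' | sort -u)
```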


Replacing self-signed External PSC certificates with a Microsoft CA signed certificate

The goal of this procedure is to replace self-signed External PSC certificates with a Microsoft CA signed certificate.

The environment consists of 2 external PSC servers (v6.5) behind a load balancer:

PSC1 – psc1.myitblog.local
PSC2 – psc2.myitblog.local
VIP – vpsc.myitblog.local
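
On vSphere 6.x appliances, the Machine SSL certificate replacement on each PSC is typically driven by the certificate-manager wizard (run it on PSC1 first, then PSC2):

```shell
# On each PSC appliance, launch the certificate replacement wizard
/usr/lib/vmware-vmca/bin/certificate-manager
```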

Continue reading

Change VDS Port Group Load Balancing Policy with PowerCLI

#Get current load balancing policy
$VDS = "VDS name"
$pg = "Portgroup name"
Get-VDswitch -Name $VDS | Get-VDPortgroup $pg | Get-VDUplinkTeamingPolicy

#Set new load balancing policy

#Set Route based on IP hash
Get-VDswitch -Name $VDS | Get-VDPortgroup $pg | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceIP

#Set Route based on source MAC hash
Get-VDswitch -Name $VDS | Get-VDPortgroup $pg | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcMac

#Set Route based on originating virtual port
Get-VDswitch -Name $VDS | Get-VDPortgroup $pg | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId

#Set Use explicit failover order
Get-VDswitch -Name $VDS | Get-VDPortgroup $pg | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy ExplicitFailover

#Set Route based on physical NIC load
Get-VDswitch -Name $VDS | Get-VDPortgroup $pg | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased

#Remove $pg to apply new load balancing policy to all portgroups on the same VDS
Get-VDswitch -Name $VDS | Get-VDPortgroup | Get-VDUplinkTeamingPolicy | Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceIP

For other Set-VDUplinkTeamingPolicy cmdlet parameters, refer to the VMware PowerCLI reference documentation.

Copy Veeam backups to AWS S3 using PowerShell

Recently I was looking for a way to move monthly Veeam backups into AWS S3 without having the client invest in Veeam Cloud Connect or AWS Storage Gateway.

So I started by re-configuring the monthly Veeam backup job to produce a full backup instead of the usual incremental. This made each .vbk (full backup) file generated at the end of the job independent of the others, which allowed me to upload the files to S3 and reduce my retention to one.

The goal was to automate this process and avoid manual monthly uploads. This naturally called for some PowerShell scripting.

Reducing the retention to one meant that each month the .vbk file would be overwritten, so I had to make sure the uploaded .vbk file in S3 was good. The only way I could verify the file was to generate an MD5 hash of the .vbk file in the local repository and compare it with the S3 object's ETag once the .vbk was uploaded. Unfortunately this approach didn't work, because the Write-S3Object cmdlet uses the multipart upload API and uploads the file in chunks; the ETag S3 then returns is not the MD5 of the whole file but a value derived from the individual part hashes, suffixed with the part count. My workaround was simply to generate the MD5 hash, upload the file, download it back, generate a second hash, and compare the two.
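
For reference, a multipart-style ETag can also be reproduced locally: it is the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. That would avoid the re-download, provided you know the part size the upload used. A rough sketch (the part size is an assumption you would need to match to your upload; single-part uploads instead get a plain whole-file MD5 as their ETag):

```powershell
#Compute a multipart-style ETag for a local file (sketch, not production code)
Function Get-MultipartETag {
    Param ([string]$Path, [long]$PartSize)
    $md5   = [System.Security.Cryptography.MD5]::Create()
    $fs    = [System.IO.File]::OpenRead($Path)
    $parts = @()
    $buf   = New-Object byte[] $PartSize
    while (($read = $fs.Read($buf, 0, $buf.Length)) -gt 0) {
        $parts += ,$md5.ComputeHash($buf, 0, $read)   #one MD5 per part
    }
    $fs.Close()
    #MD5 of the concatenated part digests, plus "-<part count>"
    $concat = [byte[]]($parts | ForEach-Object { $_ })
    $etag   = [System.BitConverter]::ToString($md5.ComputeHash($concat)).Replace('-','').ToLower()
    "$etag-$($parts.Count)"
}
```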

Anyway, here is the script. Just configure it to run after the job, using the Advanced Settings\Scripts tab of your monthly backup job.

Also note that I’ve added basic logging and alerting functionality to the script.

I hope someone out there finds this script useful!!!

#Written by Cengiz Ulusahin 25/05/17
#This script runs after each Veeam monthly backup job and copies the latest full backup file into Amazon S3
#It offers a MD5 check on uploaded files, basic logging and alerting functionality

#Function for logging
Function Write-Log {
    Param ([string]$logstring)
    $stamp = (Get-Date).ToString("yyyy/MM/dd HH:mm:ss")
    $line = "$stamp $logstring"
    Add-Content $logfile -Value $line
}

#Function for sending email alerts
Function Send-Alert {
    Send-MailMessage -SmtpServer "Enter IP address" -To "Enter To email address" -From "Enter From Email address" -Subject "Company Monthly AWS S3 Copy Job" -Body "Company monthly backup copy job to AWS S3 has failed or completed with errors. Please alert the backup team immediately by opening a ticket. Copy job log: $logfile."
}

#Function to Pause script, use for debugging
Function Pause {
    Read-Host 'Press any key to continue…' | Out-Null
}

#Define static variables
$AccessKey = "Enter AWS Access Key"
$SecretKey = "Enter AWS Secret Key"
$bucketname = "Enter S3 bucket name"
$root = "Enter the path of your monthly backup folder (e.g. D:\Monthly-Backups)"
$temproot = "Enter a path for the temporary vbk file (e.g. D:\Monthly-Backups\S3\Temp)"
$logfile = "Enter a path for the script log file (e.g. D:\Monthly-Backups\S3\Company-S3-Copy.log)"

#Get latest vbk file
$filter = "*.vbk"
$vbkfile = (Get-ChildItem -Path $root -Filter $filter | Sort-Object LastWriteTime -Descending | Select-Object -First 1).Name
$vbkfilefullpath = Join-Path -Path $root -ChildPath $vbkfile

#Generate hash for vbk file
$hash = (Get-FileHash $vbkfilefullpath -Algorithm MD5).hash

#Upload vbk file to S3 bucket
Write-S3Object -BucketName $bucketname -CannedACLName bucket-owner-full-control -File $vbkfilefullpath -Key $vbkfile -AccessKey $AccessKey -SecretKey $SecretKey

#Get vbk file copied to S3 bucket
$vbks3 = (Get-S3Object -BucketName $bucketname -Key $vbkfile -AccessKey $AccessKey -SecretKey $SecretKey).key

#Check whether the file was uploaded successfully
if ($vbks3 -ne $vbkfile) {
    Write-Log "File upload to S3 was unsuccessful. Terminating script now!"
    Send-Alert
    exit
}
else {
    Write-Log "File upload to S3 was successful."
}

#Define Temp vbk file path and name
$temppath = "$temproot\$vbkfile"

#Download vbk file from S3 into Temp path
Copy-S3Object -BucketName $bucketname -Key $vbkfile -LocalFile $temppath -AccessKey $AccessKey -SecretKey $SecretKey

#Get Temp vbk file downloaded into Temp path
$tempfile = (Get-ChildItem -Path $temppath).Name

#Check whether the file was downloaded successfully
if ($tempfile -ne $vbkfile) {
    Write-Log "File download from S3 was unsuccessful. Terminating script now!"
    Send-Alert
    exit
}
else {
    Write-Log "File download from S3 was successful."
}

#Generate hash for Temp vbk file
$hash2 = (Get-FileHash $temppath -Algorithm MD5).hash

#Compare the hashes; only continue to clean-up if they are equal
if ($hash2 -ne $hash) {
    Write-Log "Hashes are not equal! Terminating script now!"
    Send-Alert
    exit
}
else {
    Write-Log "Hash=$hash Hash2=$hash2 Hashes match!"
}

#Remove Temp vbk file
Remove-Item -Path $temppath
$tempcount = (Get-ChildItem -Path $temproot | Measure-Object).Count

#Check whether the Temp file was deleted
if ($tempcount -eq 0) {
    Write-Log "Removed Temp vbk file successfully."
}
else {
    Write-Log "Couldn't remove Temp vbk file!"
}

Authenticating Office 365 users using on-premises AD DS

This blog post covers the integration methods for allowing administrators to authenticate their Office 365 users using on-premises Active Directory Domain Services (AD DS), particularly focusing on the method which involves utilizing Active Directory Federation Services (AD FS).

This post is not intended as a step-by-step guide on how to implement integration between Office 365 and on-premises Active Directory Domain Services!!! It's only intended as a high-level overview.

Continue reading

Dropbox on Server 2012 R2 closes unexpectedly

The Dropbox app closes unexpectedly (service continues to run) while running on Server 2012 R2.  When the app closes syncing stops and users start complaining.  So I wrote the following script and scheduled it to run every 30 minutes.

FYI… although it installs, Dropbox is not officially supported on Server systems.

The script checks whether the Dropbox process is running. If it is, it displays a message and quits; if it's not, it tries to start the Dropbox process and sends an email if the process starts successfully.

$Dropbox = Get-Process "Dropbox" -ErrorAction SilentlyContinue
if ($Dropbox) {
    Write-Output "Dropbox is running!!!"
}
else {
    Start-Process -FilePath "C:\Program Files (x86)\Dropbox\Client\Dropbox.exe"
    Start-Sleep -Seconds 10   #give the process time to start before re-checking
    $Dropbox = Get-Process "Dropbox" -ErrorAction SilentlyContinue
    if ($Dropbox) {
        Write-Output "Dropbox has been started!!!"
        Send-MailMessage -To "administrator@domain" -From "email@domain" -Subject "Dropbox has been started!!!" -Body "The Dropbox process on server SERVER NAME (SERVER IP) wasn't running, a script has started the Dropbox process." -SmtpServer EMAIL SERVER IP -Port EMAIL SERVER PORT
    }
}
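
The 30-minute schedule itself can be created with schtasks; a sketch, where the task name and script path are placeholders:

```shell
schtasks /Create /TN "DropboxWatchdog" /SC MINUTE /MO 30 /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Check-Dropbox.ps1" /RU SYSTEM
```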

CUCM integration in a Multi-Forest environment

Only a single Active Directory Forest can be integrated with Cisco Unified Communication Manager (CUCM) to get user information and perform authentication.

In Multi-Forest environments you can utilize AD LDS (Lightweight Directory Services), formerly known as ADAM, to get user information and perform authentication from different AD domains that exist in different forests.

AD LDS is a Lightweight Directory Access Protocol (LDAP) directory service that provides flexible support for directory-enabled applications, without the dependencies that are required for Active Directory Domain Services (AD DS). AD LDS provides much of the same functionality as AD DS, but it does not require the deployment of domains or domain controllers. You can run multiple instances of AD LDS concurrently on a single computer, with an independently managed schema for each AD LDS instance.

This was my first time configuring AD LDS, so I had to reference a number of blog posts and a load of Microsoft documentation to get it working. In all honesty, it was an absolute nightmare. I'm hoping this post will save you from all the headache I've endured.

The step-by-step instructions below follow the official guide produced by Cisco. Make sure you have it open as you work through my instructions, as I reference the Cisco guide often (there was no point repeating instructions that are already in the Cisco guide).

Continue reading