Have you ever wondered about securing your Citrix ADC (formerly NetScaler) or Gateway implementation further, with all the DDoS news going around of late? If you already have a NetScaler/ADC implementation, you can easily leverage it and configure the Rate Limiting feature, which is a fantastic weapon to stop such threats and keep malicious actors at bay. This can even be implemented on NetScaler/ADC Standard edition, so there is no excuse. If you are interested in improving the security posture of your Citrix ADCs/Gateways/SD-WANs/SDXs or CPXs, then read on.
Common use-cases for Rate limiting
Limit the number of requests per second from a URL.
Drop a connection based on cookies received in a request from a particular host if the request exceeds the rate limit.
Limit the number of HTTP requests that arrive from the same host (with a particular subnet mask) and that have the same destination IP address.
Create Rate Limiting Policies
We are going to utilize the Responder feature to complete the configuration. So when you are ready to get started, log on to the NetScaler console with root privileges (nsroot preferably) and follow the steps below.
You will then need to navigate to the AppExpert node in the management portal. This is where you will find the Rate Limiting policies.
Expand AppExpert and select Rate Limiting
Expand Rate Limiting and click on Selectors
Click Add and enter a name for the Selector
Select the expressions as follows (note that there is a DOT after REQ for Expression 2).
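If you prefer the CLI, the same objects can be created from the NetScaler shell. This is a minimal sketch with placeholder names (rl_selector, rl_identifier, rl_drop_pol, my_vserver) and example thresholds; adjust the expressions and the bind point for your environment.

```shell
# Selector keyed on client source IP and the requested URL
add ns limitSelector rl_selector CLIENT.IP.SRC HTTP.REQ.URL

# Limit identifier: at most 100 requests per 1000 msec time slice per selector key
add ns limitIdentifier rl_identifier -threshold 100 -timeSlice 1000 -mode REQUEST_RATE -selectorName rl_selector

# Responder policy that drops any request exceeding the limit
add responder policy rl_drop_pol "SYS.CHECK_LIMIT(\"rl_identifier\")" DROP

# Bind to the virtual server you want to protect
bind lb vserver my_vserver -policyName rl_drop_pol -priority 100 -gotoPriorityExpression END -type REQUEST
```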
Now, how do you test this? It isn't easy with the existing threshold values, so to make testing easier, I would adjust the threshold numbers to something we can easily hit. For example, if we reduce the threshold value to 10 and the time slice to 10000 msec, we only need to perform 10 requests within 10 seconds. You can also bump the timer up to 20 sec (20000 msec) if 10 secs is still too tight. For that, you will need to go to the limit identifier that you set up earlier and adjust the value as below.
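Assuming your limit identifier is named rl_identifier (a placeholder), the same test values can be set from the CLI:

```shell
# 10 requests allowed per 10000 msec (10 sec) window
set ns limitIdentifier rl_identifier -threshold 10 -timeSlice 10000
```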
Now, open a browser page and navigate to the URL in question. Refresh the page a few times; after the 10th successful attempt, the 11th attempt will be dropped/reset by the NetScaler. The end user may see something like the below.
Of course, there are other selectors you can target instead of the Client IP and HTTP URL I have used in my example. Below are some useful links to get started if you want further reading.
The Network Policy Server (NPS) extension for Azure MFA adds cloud-based MFA capabilities to your authentication infrastructure using your existing servers. With the NPS extension, you can add phone call, text message, or phone app verification to your existing authentication flow without having to install, configure, and maintain new servers.
This extension was created for organizations that want to protect VPN connections without deploying the Azure MFA Server. The NPS extension acts as an adapter between RADIUS and cloud-based Azure MFA to provide a second factor of authentication for federated or synced users.
When using the NPS extension for Azure MFA, the authentication flow includes the following components:
NetScaler receives requests from VPN clients or Citrix ICA Proxy users and converts them into RADIUS requests to NPS servers.
NPS Server connects to Active Directory to perform the primary authentication for the RADIUS requests and, upon success, passes the request to any installed extensions.
NPS Extension triggers a request to Azure MFA for the secondary authentication. Once the extension receives the response, and if the MFA challenge succeeds, it completes the authentication request by providing the NPS server with security tokens that include an MFA claim, issued by Azure STS.
Azure MFA communicates with Azure Active Directory to retrieve the user’s details and performs the secondary authentication using a verification method configured to the user.
There are some requirements that need to be met to deploy this solution.
The NPS Extension for Azure MFA is available to customers with licenses for Azure Multi-Factor Authentication (included with Azure AD Premium, EMS, or an MFA stand-alone license). Consumption-based licenses for Azure MFA such as per user or per authentication licenses are not compatible with the NPS extension.
These libraries are installed automatically with the extension.
The Microsoft Azure Active Directory Module for Windows PowerShell is installed, if it is not already present, through a configuration script you run as part of the setup process. There is no need to install this module ahead of time if it is not already installed.
Azure Active Directory
Everyone using the NPS extension must be synced to Azure Active Directory using Azure AD Connect, and must be registered for MFA.
When you install the extension, you need the directory ID and admin credentials for your Azure AD tenant. You can find your directory ID in the Azure portal. Sign in as an administrator. Search for and select the Azure Active Directory, then select Properties. Copy the GUID in the Directory ID box and save it. You use this GUID as the tenant ID when you install the NPS extension.
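If you have the Azure AD PowerShell module handy, the same GUID can be pulled from a prompt instead of the portal. This is a sketch; it assumes the AzureAD module is installed and you sign in as an admin:

```powershell
# Sign in with your Azure AD admin account; a login prompt will appear
Connect-AzureAD

# The ObjectId of the tenant detail is the Directory (tenant) ID
(Get-AzureADTenantDetail).ObjectId
```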
The NPS server needs to be able to communicate with the following URLs over ports 80 and 443.
Verify that your sync status is Enabled and that your last sync was less than an hour ago.
Determine which authentication methods your users can use
There are two factors that affect which authentication methods are available with an NPS extension deployment:
The password encryption algorithm used between the RADIUS client (VPN, NetScaler, or other) and the NPS servers.
PAP supports all the authentication methods of Azure MFA in the cloud: phone call, one-way text message, mobile app notification, OATH hardware tokens, and mobile app verification code.
CHAPV2 and EAP support phone call and mobile app notification.
The input methods that the client application (VPN, NetScaler, or others) can handle. For example, does the VPN client have some means to allow the user to type in a verification code from a text or mobile app?
Register users for MFA
Before you deploy and use the NPS extension, users that are required to perform two-step verification need to be registered for MFA. More immediately, to test the extension as you deploy it, you need at least one test account that is fully registered for Multi-Factor Authentication.
Install the Network Policy Server role in your environment. You can choose to install this on any domain joined Server OS machine in the network.
Ideally, you would want it to sit close to your Active Directory server just to make authentication and authorization traffic quicker. Or just install it straight onto your AD server; it's totally up to you.
Installing the NPS role is dead easy. Just fire up your Server Manager and go to Manage – Add Roles and Features. Select Network Policy and Access Services
It will ask you to install Remote Server Administration Tools. Say Add Features.
Click Next (3 times) until you reach the Confirmation page. Click Install
Once installed, you will need to register the server in Active Directory.
Open the NPS console as below and right click the NPS node and click Register Server in Active Directory
Now it’s time to install the NPS extension for Azure.
Installing and Configuring NPS Extension for Azure MFA
Once downloaded, run the NpsExtnForAzureMfaInstaller.exe as an Administrator. If you want to change the install location, Click Options and choose a different location.
If not, just click Install.
The setup is quick. Click Close once it finishes.
Open PowerShell as Administrator. You have to have your Azure Portal admin credentials handy before this step.
Navigate to the install location for NPS Extension C:\Program Files\Microsoft\AzureMfa\Config using PowerShell.
Run the PowerShell script in that directory, AzureMfaNpsExtnConfigSetup.ps1, as below.
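The two steps above boil down to the following (the path shown is the default install location):

```powershell
cd "C:\Program Files\Microsoft\AzureMfa\Config"
.\AzureMfaNpsExtnConfigSetup.ps1
```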
PowerShell will begin the installation of NuGet provider assemblies including MSOnline cmdlets
It’s gonna tell you that you are installing this from an untrusted repository. Just say A for Yes to All and continue.
Now, PowerShell will take you to portal.azure.com where you will need your Azure AD admin credentials to login.
Login with your Azure credentials
At this stage, it will ask for the Tenant ID. Copy the Directory ID and paste it in the PS window
It does a few things as below
## It creates a Self-Signed certificate
## It grants private key access to NETWORK SERVICE
## Restarts the NPS Policy Service
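You can sanity-check what the script did before moving on. This is a sketch; the certificate filter assumes the subject string the extension typically stamps, so adjust it if your store looks different:

```powershell
# Look for the self-signed certificate the script created for your tenant
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*Microsoft NPS Extension*" } |
    Select-Object Subject, NotAfter

# The NPS service (service name IAS) should be running after its restart
Get-Service IAS
```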
You may now exit out of PowerShell as it is time to configure NPS.
Configure RADIUS Clients
Open the NPS console and navigate to RADIUS Clients and Server Folder
Expand the folder and Right Click on RADIUS Clients
Configure the settings as below
Give it a Friendly Name
Enter the IP address of the NetScaler (NSIP)
Enter a Shared Secret Key (Save this key as we will need this later)
Add all the RADIUS clients following the steps above. If you set this up on a NetScaler HA configuration, you will have two NetScaler NSIPs to add. You should see something similar to the following.
Configure Remote RADIUS Servers
Select the node – Remote RADIUS Server Groups
Right-click and select New
Give a Group Name
Type the IP address or name of the Active Directory Domain Controller Server in there and Click OK.
You can choose to add the FQDN of the domain controller or just use the IP address. You can add multiple DCs here for redundancy.
Click on the Authentication/Accounting tab. Configure it as below
Now click on the Load Balancing tab and assign weights to the servers if you are adding multiple AD servers.
You can also configure the Timeout settings in here.
Notice that I have increased the timeout values to 60. This is important when using phone call and SMS-based authentication because they take more time. Even with the Microsoft Authenticator app, the default values are a little too low, so adjust them according to your environment.
Add all the servers that you intend to use as Domain Controllers in here.
Configure Connection Request Policies
It is time now to create a Connection Request Policy. We need a couple of them for this deployment. There are a few things to keep in mind as follows before we proceed to create the policies.
The default built-in connection request policy uses NPS as a RADIUS server and processes all authentication requests locally.
To configure a server running NPS to act as a RADIUS proxy and forward connection requests to other NPS or RADIUS servers, you must configure a remote RADIUS server group in addition to adding a new connection request policy that specifies conditions and settings that the connection requests must match.
If you do not want the NPS to act as a RADIUS server and process connection requests locally, you can delete the default connection request policy.
If you want the NPS to act as both a RADIUS server, processing connection requests locally, and as a RADIUS proxy, forwarding some connection requests to a remote RADIUS server group, add a new policy using the following procedure and then verify that the default connection request policy is the last policy processed by placing it last in the list of policies. This is the approach we are using for NetScaler deployment.
Create a Connection Request Policy for No Forward
Open the NPS server console and expand Policies node
Right Click Connection Request Policies and choose New
Give the policy a Name
Select Client IPv4 Address
Click Add again
Specify the Client IPv4 Addresses – this will be the NetScaler NSIP if RADIUS isn't load balanced. If load balanced, you must use the Subnet IP of the NetScaler (SNIP).
Configure the Authentication exactly as below
Click Next a couple of times until the Summary page is reached.
Create the second Connection Request Policy for Forwarding
Right Click Connection Request Policies and choose New
Give the policy a Name
Select NAS Identifier
Click Add again
Enter the name of the NAS Identifier – MFA
Configure the Authentication as below – MS-CHAP-v2
If you are on the Summary page, click Finish
The two connection request policies should be moved up in the policy priority order and should look like the below.
Create the Network Policy
Go to the Network Policies node
Right Click and select New
Give the policy a Name
Select NAS Identifier
Enter MFA in there
Click Add again
Configure the Authentication methods as below
A few more clicks will get you to the Summary page.
Click Finish on the Summary page.
Move the Network Policy we just created to the top of the list and assign it priority 1.
Disable the existing built-in (default) Network Policies.
You can now proceed to create your vServer in NetScaler. It could be a NetScaler Gateway or a VPN vServer. In this post, I will not be showing how to create a NetScaler vServer; it is fairly straightforward and there are tons of blog posts on it on the internet. You will just need to set everything up just like you would set up a single-factor Gateway portal in NetScaler.
You will need to make sure that ports 1812 and 1813 are open from the NetScaler to the backend NPS server (bi-directional)
If you have multiple subnet IPs on the NetScaler, use a Net profile to isolate traffic to a particular source IP address.
If you aren’t load balancing NetScaler, NSIPs are the source IP address. Otherwise SNIPs will need to be used. (The client IPv4 address entries that you made in the previous step will change accordingly)
Create RADIUS Policies and Profiles
Go to NetScaler Gateway node – Policies – Authentication – RADIUS
Go to Servers tab and click Add
Give a name to the Server profile
Enter the IP address of the NPS server
Port is 1812
Enter the Shared Secret Key
Change the time out to 60 seconds if you intend to use phone calls, SMS or phone app auth.
Test the connection and ensure that you get all green
Enter the NAS ID here – MFA
Password encoding as mschapv2
Similarly, create additional RADIUS servers using the same steps above.
Create RADIUS policies now to attach the RADIUS server profiles so that it could be bound to vServers.
Create a RADIUS policy and attach the profile as below.
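For reference, the equivalent NetScaler CLI looks roughly like this. The names, server IP, and secret are placeholders; the parameters mirror the GUI settings above (timeout 60, NAS ID MFA, mschapv2 encoding):

```shell
add authentication radiusAction NPS_RadiusAct -serverIP 10.0.0.10 -serverPort 1812 -radKey <shared-secret> -authTimeout 60 -radNASid MFA -passEncoding mschapv2
add authentication radiusPolicy NPS_RadiusPol ns_true NPS_RadiusAct

# Bind as primary authentication on the Gateway vServer
bind vpn vserver my_gateway -policy NPS_RadiusPol -priority 100
```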
Once your vServer is ready, the RADIUS policy can be attached to the vServer as primary authentication. Doing this will still perform Active Directory LDAP authentication, after which the NPS extension will check the second factor.
You can now test with an account that is MFA enabled. If everything is setup correctly, MFA will work fine and prompt with a second factor.
Always check the Authentication server status of the RADIUS server in NetScaler. It should be green when the traffic is allowed. If it is not, work with your network team to figure out why the traffic doesn't reach the NPS backend or isn't being returned.
Add a DNS A record entry for the Remote URL for Citrix access
If the NetScaler IPs (NSIP) don’t work, try the Subnet IP as RADIUS clients. If you make a change, ensure that the change is reflected in the Network Policies too.
On NetScalers where multiple subnet IPs are used, isolate the traffic using NET Profiles.
Check aaad.debug logs on NetScaler.
If you get the below, it is most likely an issue with the RADIUS client IPs – the wrong IP is being used.
Look out for routing issues. If your NPS servers sit in a different subnet than your NetScaler IPs, looking at the route table could shed some light. If routes are missing, add them, but please remember not to break existing traffic. If unsure, ask your network team for assistance.
Check the Dial-In tab in the AD properties for the user and ensure that the user is allowed access. Or you can configure NPS to override the AD settings by setting the below (look for the red dot below).
Event Logging – Ensure that NPS logs are turned ON. Log files will be found at C:\Windows\System32\LogFiles. Make sure that the logs are set to DTS compliant. Event Viewer is also a reliable source.
If you don’t want to limit non-MFA users from accessing the portal, you can add the below registry keys to the NPS servers. This will allow users who aren’t registered in Azure MFA to continue to authenticate using LDAP authentication. This is vital during migration phase. However, this setting must be removed before you move into production.
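The value in question is the documented REQUIRE_USER_MATCH setting under HKLM\SOFTWARE\Microsoft\AzureMfa. One quick way to set it (run as Administrator on each NPS server):

```powershell
# FALSE = users not yet registered for Azure MFA fall through to
# first-factor-only authentication. Remove before production!
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\AzureMfa" `
    -Name "REQUIRE_USER_MATCH" -PropertyType String -Value "FALSE" -Force
```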
Microsoft Windows Virtual Desktops (WVD) has been making a lot of waves in the EUC industry ever since it was announced by Microsoft in September 2018.
Windows Virtual Desktop (WVD) is a desktop and application virtualization solution that runs from Microsoft Azure. Unlike Microsoft's previous foray into the application and desktop virtualization market with Microsoft RemoteApp, which didn't take off quite so well, this time I believe they have a compelling product on their hands.
WVD provides an impressive list of benefits to companies that want to adopt it. The most important ones are quoted below.
Set up a multi-session Windows 10 deployment that delivers a full Windows 10 with scalability
Virtualize Office 365 ProPlus and optimize it to run in multi-user virtual scenarios
Provide Windows 7 virtual desktops with free Extended Security Updates – This is big for a lot of companies around the world who aren’t ready to migrate to Windows 10 yet.
Bring your existing Remote Desktop Services (RDS) and Windows Server desktops and apps to any computer
Virtualize both desktops and apps
Manage Windows 10, Windows Server, and Windows 7 desktops and apps with a unified management experience
Below are the licensing requirements for running WVD in Azure.
Your infrastructure should meet the following requirements to support Windows Virtual Desktop:
So, what is Citrix doing here and how does Citrix add value to the WVD offering? WVD by itself is a perfect fit for a lot of businesses out there, mostly the start-ups and SMBs. What if we combine WVD with Citrix? That’s a deadly combo right there. Citrix could take Microsoft’s WVD offering to the next level by wrapping a management layer around it, offering flexibility, choice, cost optimization and enhanced security.
The enhancements that Citrix provides to the WVD offering are best depicted in the picture below (courtesy of Citrix).
Citrix has developed special optimization packs for Microsoft Teams and Skype for Business which makes a world of difference, if businesses want to run these collaboration tools in a virtualized infrastructure. Without the optimization packs, it’s virtually impossible to deliver good user experience with Teams and Skype for Business when using Audio, Video and Screen Sharing. Running single-session VDI workloads still won’t cut it either.
Hybrid Cloud Model – WVD only lets you run your multi-session Win 10 workloads in Azure. Citrix can complement that approach by letting you run your traditional RDSH workload wherever you like – on-prem, Azure, AWS, Google Cloud, Oracle Cloud, or on HCI solutions such as Nutanix. Customers can combine WVD with RDSH workloads and manage them via a single console.
Use Citrix HDX which is the best of the breed in remoting protocols.
Citrix Machine Creation Services (MCS) enables rapid creation of virtual machines with minimal infrastructure by utilizing the hypervisor APIs.
AutoScale – Customers can quickly ramp workloads up and down on demand. These days, customers also have the option of vertical load balancing, which brokers user sessions to a single machine until a desired level is reached, after which connections get routed to the next workload until it is fully loaded. This is hugely useful for cost optimization and reduces the overall Total Cost of Ownership (TCO) by cutting the number of extra servers used.
Advanced Monitoring – Citrix has its own repertoire of monitoring tools on top of Microsoft’s Azure-based monitoring.
App Layering – Citrix App Layering radically reduces the time it takes to manage Windows applications and images. It separates the management of your OS and apps from your infrastructure. You can install each app and OS patch once, update the associated templates, and redeploy your images.
App Protection is an add-on feature that provides enhanced security when using Citrix Virtual Apps and Desktops published resources.
Session Recording allows you to record the on-screen activity of any user session hosted on a VDA for Server OS or Desktop OS, over any type of connection, subject to corporate policy and regulatory compliance. Session Recording records, catalogs, and archives sessions for retrieval and playback.
Citrix Analytics – AI-driven performance and security analytics for businesses that deploy the Virtual Apps and Desktops service.
Citrix SD-WAN – Citrix SD-WAN is a next-generation WAN Edge solution that simplifies digital transformation for enterprises. It offers comprehensive security, the best application experience for SaaS, cloud, and virtual apps and desktops.
With Citrix and WVD combo, customers can bring the multi-factor authentication vendor of their choice such as Okta, OAuth-based authentication, RADIUS-based multi-factor auth and so on.
Let’s Bust a Myth
This may come as a surprise to many of you working in the EUC space. A lot of folks in the industry think that in order to use WVD, you need to buy Citrix Managed Desktops, which is a new product offering from Citrix, and that it is the only offering entitled to use WVD. That isn't true at all.
You could use a plethora of the following services from Citrix and enjoy the full benefits and simplicity that WVD has to offer. In summary, if you are an existing Citrix Cloud customer that utilizes any of the below services from Citrix, you are entitled to WVD as well.
Let’s conclude this. Citrix’s offering isn’t really trying to compete with Microsoft’s WVD, but rather they are complementing each other by providing more choices to the customers who want to run their VDI and RDSH workloads in the cloud. Isn’t it great to have choices in life? 🙂
If you have noticed the Restart button for published desktops in Citrix Virtual Apps and Desktops 7 1912 LTSR recently and wondered why in the world Citrix would give users the ability to restart machines, you are not alone. Make no mistake, this is a perfectly fine setting to have enabled out-of-the-box for VDI deployments where only Desktop OSes are being published, or on delivery groups that contain Desktop OSes. You would want your users to be able to restart their desktop every now and then anyway.
Now after going through the Citrix SDK documentation, I found the below notes for the -AllowRestart argument that governs the restart button.
AllowRestart (System.Boolean) Indicates if the user can restart sessions delivered from the rule’s desktop group. Session restart is handled as follows: For sessions on single-session power-managed machines, the machine is powered off, and a new session launch request made; for sessions on multi-session machines, a logoff request is issued to the session, and a new session launch request made; otherwise the property is ignored.
So it isn't too bad to have that button available for RDSH delivery groups, but it should probably be called something else. The name "restart" has a negative vibe to it in the multi-session world. lol
The option\button will appear like the below.
How would you remove the Restart option?
You will need to do this via PowerShell.
Find the delivery group that has RDSH based published desktops and take a note of the Name parameter. You can do this on all the delivery groups if you want to disable this button for all published desktops, both RDSH and VDI.
Run the below command to find the value for the delivery group you want to turn the setting OFF for. The parameter we are looking for is AllowRestart. When the value is True, the Restart button is shown; setting it to False will remove the button from StoreFront.
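A sketch with a placeholder delivery group name ("RDSH-Desktops"); run it from a machine with the Citrix PowerShell snap-ins loaded:

```powershell
Add-PSSnapin Citrix* -ErrorAction SilentlyContinue

# Inspect the current value for the delivery group
Get-BrokerAccessPolicyRule -DesktopGroupName "RDSH-Desktops" |
    Select-Object Name, AllowRestart

# Set AllowRestart to False to hide the Restart button
Get-BrokerAccessPolicyRule -DesktopGroupName "RDSH-Desktops" |
    Set-BrokerAccessPolicyRule -AllowRestart $false
```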
Citrix Machine Creation Services (MCS) is a compelling technology these days for provisioning virtual machines quickly and easily in Citrix environments. The whole technology is built around simplicity and requires just a supported hypervisor that utilizes snapshots to create additional VMs on the fly. There isn’t much required from a supporting infrastructure point of view as well. If you have a robust hypervisor with enough space in the storage array, MCS will work just fine. All that you would require is a service account with defined permissions for the whole thing to function.
If anyone wants to know what permissions are required for the MCS service account to function correctly, that could be found in the following Citrix official links.
I will even argue that MCS is just as good as another provisioning technology from Citrix, named Citrix Provisioning (formerly Citrix Provisioning Services or PVS) with the recent advancements it has made. There are scenarios when Citrix PVS is the better choice but that is a topic for another blog post.
While we are in the midst of the Coronavirus pandemic and everyone is staying at home safe and sound, I have had some pleasant experiences working with MCS spinning up extra virtual machines for my customers here in Auckland, as they needed to ramp up their farm capacity to cater to the extra load. I could literally spin up machines in seconds (I am not exaggerating even a bit…) and just be ready for the incoming wave of Citrix users.
In this blog, let’s discuss how Citrix MCS works in general and what happens under the hood when MCS creates virtual machines. Let’s also discuss and compare Citrix MCS in an On-Prem setup versus MCS in Azure. I also have to say that most of the diagrams that you see below are shameless copies of Citrix’s own diagrams used in one of their webinars. I don’t mind accepting that 🙂 Now, that’s out of the way, let’s dive right in.
Citrix MCS – On Prem
So how does Citrix MCS work with your on-prem hypervisor?
You create a master template or gold template and make all the changes that you want to it.
Once you are happy with the changes to the VM, go to your hypervisor console and take a snapshot of the VM.
After that you go to the Studio console and either add a new Machine catalog or add machines to an existing catalog. At that time, a full copy of the VM’s base image disk is taken and copied to the first storage repository (Datastore for VMware folks!).
Now it creates a Preparation VM and it is going to get interesting from now on.
To the Preparation VM, an Instruction Disk is attached. This strips out all the previous identity information from the prep VM; in other words, it de-personalizes the VM so that a fresh identity can be assigned to it at a later stage.
Now it's time to power ON the Preparation VM.
The Image preparation process begins in step 7.
The Preparation VM now updates the snapshot A’ along with the original snapshot.
The Preparation VM is shutdown after this stage.
The Instruction Disk is deleted.
The OS disk is detached and the preparation VM is also deleted.
The updated snapshot A” is now replicated to each storage repository (or Datastore in VMware). The image is now ready to deploy.
MCS now creates copies of that image and, in the process, creates Identity Disks that differentiate each VM from the others. If you create more than one VM in the Machine catalog, more Identity Disks are created and one is assigned to each image.
Next step is creating the required number of VMs by attaching the Identity Disks and Differencing Disk. Since all the VMs are sharing a single snapshot, the snapshot is read-only. Any changes, additions or runtime area is added to the Differencing Disk. The on-prem hypervisor is now leveraged to merge the disks to produce the virtual machines.
Identity Disks are 16 MB in size and are read-write capable. This makes them reusable for future VM creation. Delivery Controllers are responsible for creating Identity Disks.
Citrix MCS – Azure
Now let’s look at how MCS works in Azure. It’s mostly the same steps except for a few key differences. In the on-prem version, depending on the hypervisor used, the file formats could vary as in VMDK for VMware vSphere and VHD for Hyper-V or Citrix Hypervisor. With MCS in Azure, the disk file format is VHD as it is based on Azure Hypervisor which is a customized version of Hyper-V.
You create a Master VM to make further copies of it just as in traditional MCS setup.
The Master VHD is created in a Storage Account. This is the master storage account.
We then run the MCS Wizard via the Studio if you use the Citrix Cloud service or from Azure Portal if you are subscribed to Citrix Virtual Apps and Desktops Essentials.
The MCS Wizard checks for the availability of the resources using the Azure API.
We will now create a Resource Group (RG) to host all the additional VMs that MCS will create in Azure. One RG can host up to 240 VMs.
Storage Accounts are created within the Resource Group to host the disks for the virtual machines. One storage account can host up to 40 VMs. Additional storage accounts are created depending on how many VMs we need.
Network security Groups (NSG) are created next and they will isolate the prepped VM from the rest of the network. If we need 400 VMs, two RGs will be created to host all the VMs.
The next step is to validate the connections. The Service Principal connectivity will be validated to access the Azure resources.
The image is consolidated and is prepared for copy. Remember the image is located in the Master storage account in the steps above.
The Master Image is copied to the other Storage accounts defined for the machine catalogs. Unlike other hypervisor approaches, we don’t need to create snapshots ourselves in this occasion. Azure based Citrix MCS will use the provisioning APIs in Azure to set this all up for us.
The Identity disk for the Preparation VM is created but not attached yet.
Preparation VM (A’) is created after that.
Once the Prep VM is ready, it is stopped to attach the Identity Disk.
At this stage, the Identity Disk is attached to the Preparation VM.
The Preparation VM is started again for further steps.
Once the preparation steps are completed, the VM is stopped.
Preparation VM disk is now copied to the new Storage Account that is defined for MCS. This is the Base Image.
The base image is replicated to other storage accounts within MCS.
The Preparation VM and its Identity Disk are now deleted.
Then we have a pre-flight check where all the created resources are checked for integrity by MCS. The Base Image is now ready to be cloned to make more VMs.
Storage Accounts – Legacy Approach
Now, there are two approaches here – Storage Accounts (Legacy) and On-Demand Provisioning. Let's discuss the Legacy approach first (steps 21 to 25).
Identity Disks are created for the required number of VMs that will be created by MCS.
OS Disks (from Base Image) are also created followed by Identity Disks.
VMs are provisioned and linked to the OS Disks.
Identity Disks are attached to the VMs.
VMs are stopped to avoid extra billing costs (this is the case for VDI machines). When users connect, the machines are started on demand and fired up ready for action.
In On-Demand provisioning method, MCS will keep all the required settings within the database and will create VMs only when it is required in an on-demand fashion and not pre-created as in traditional MCS.
Only identity Disks and NICs are created during MCS in this approach.
You would have noticed by now, instead of Storage Accounts, Azure Managed Disks are being used here.
When there is user traffic in the farm, Citrix VDAs are created on demand. As part of that step, the OS Disk is created at VM launch time.
VMs are created and linked to the OS Disks at VM launch time.
As a final step, Identity Disks are attached to the VM at launch time before the VM is ready to serve the users.
Once the VM is no longer needed, the VM is shutdown and deleted.
OS Disks are also deleted post shutdown.
However, the Identity Disks and NICs are retained for future use. When the VMs are required again, the OS Disk will be attached, merged with the Identity Disk before it is available to be used again.
That’s about it peeps. Happy MCSing in the cloud!!