Storefront Load balancing using NetScaler


It’s been a while since I wrote on my blog, so let’s get straight into the post without much mucking around. This time we will discuss how to set up Storefront load balancing using NetScalers. This can be configured on a standalone NetScaler or on a NetScaler HA pair. The recommendation is obviously to set this up on an HA NetScaler pair, so that a NetScaler outage doesn’t leave Storefront unavailable as well.

My Storefront version is 3.11, in a server group with 2 Storefront servers. The NetScaler version is 11.1, but the NS version shouldn’t matter much as the steps are more or less the same for other NetScaler firmware versions, newer or older (unless you are too far behind).

Pre-Requisites

To configure Storefront load balancing we need the following –

  • 2 or more Storefront servers
  • an IP address for the virtual server that hosts the LB configuration
  • an SSL certificate that matches the intended load-balanced Storefront URL – the certificate can be a wildcard or a named certificate

First Things First

Log on to your NetScaler and navigate to System — Settings — Configure Basic Features. Ensure that Load Balancing is selected; if not, select it and click OK.


NetScaler Configuration

Create Servers

Now, navigate to Traffic Management — Load Balancing — Servers. Click Add


Give the Storefront server a name and enter the IP address of the server. Ensure that “Enable after creating” is selected. Click Create

Add the second Storefront server following the above steps. If you have more than 2  servers, add all of them.
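If you prefer the CLI, the equivalent configuration is just a couple of add server commands. A minimal sketch, assuming example server names and IP addresses (substitute your own):

```
add server SF01 192.168.10.11
add server SF02 192.168.10.12
```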


Create Monitors

Newer NetScaler versions come with a built-in Storefront monitor, so we are going to make use of it here. Go to Traffic Management — Load Balancing — Monitors and click Add

Here I am only going to create a single monitor to probe all my Storefront servers. You can choose to create multiple monitors depending upon the number of Storefront servers that you have; in my case, I will create just one.

Give a name to the monitor and select the type as STOREFRONT


Now select the Special Parameters tab and provide the name of the Store that you have created in Storefront. Check the two entries – Storefront Account Service and Check Back End Services.


Click on the Standard Parameters tab. Ensure that Secure is selected as below. Click Create

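For reference, the same monitor can be sketched from the CLI. The monitor name and store name below are examples, and the exact flag spellings can vary by firmware, so treat this as a sketch and verify the parameter names with `show lb monitor` on your build:

```
add lb monitor SF_mon STOREFRONT -storename Store -storefrontacctservice YES -secure YES
```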

Create Service Groups

Go to Traffic Management — Load Balancing — Service Groups and click Add

Give a name to the service group and select the protocol as SSL. Check the entries below

  • State
  • Health Monitoring
  • AppFlow Logging (only if you have NetScaler MAS in your environment)

Click OK


Under Service Group Members, add the server entities that we created earlier. Once done, they will look like the below


Under Settings, set the Client IP Header to X-Forwarded-For


Under Monitors, bind the monitor that we created before


Under SSL Parameters, configure the settings as below


Under Ciphers, set up the ciphers based on your company security policy.


Once done, the Service Group for Storefront should look like this

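The service group steps above can be sketched on the CLI as follows, assuming the example names used earlier (SF01/SF02 for the servers, SF_mon for the monitor):

```
add serviceGroup SF_svcgrp SSL -cip ENABLED X-Forwarded-For
bind serviceGroup SF_svcgrp SF01 443
bind serviceGroup SF_svcgrp SF02 443
bind serviceGroup SF_svcgrp -monitorName SF_mon
```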

Now, it’s time to create the Virtual Server

Virtual Server

As mentioned in the pre-requisites section, we need an IP address for this. If the NetScalers are sitting in the DMZ, a DMZ IP address is required. In my case, the NetScalers are hosted internally so I will use an unused internal IP address.

We will also need the SSL certificate here.

Go to Traffic Management — Load Balancing — Virtual Servers

Click Add

Give a Name to the virtual server and select the protocol as SSL

Specify the IP address under IP Address field and specify the port # as 443


Click More and specify the settings as below (note that AppFlow Logging only needs to be enabled if you have a NetScaler MAS setup or another monitoring solution that can make use of AppFlow logs)


 

Under Services and Service Groups, click on Load Balancing Virtual Server ServiceGroup Binding

Click Add Binding and select the Service Group that you created in the previous step. Click OK

Once completed, the page should look like the below. Click Close and click Done


It’s time to attach the certificate. Go to Traffic Management — SSL — Manage Certificates / Keys / CSRs


 

Click on Upload button and upload your certificate file to NetScaler

Go to Traffic Management — SSL — Certificates — Server certificates

Under Certificate, click on Server Certificate and then Install

Give a certificate key-pair name and choose the certificate that was just uploaded in the previous step. Click Install

Now, go back to Traffic Management — Load Balancing — Virtual Servers

Select the Virtual server created for Storefront and click Edit. Under Certificates, select Server Certificate and then Click Add Binding

Under SSL Ciphers, select the ciphers that you would like to be in place. I am going with the default one. This is not the most secure for a production setup so go with something that’s secure enough for your organization.

Under SSL Parameters, configure the settings as below. Click OK


Under Method, select LEASTRESPONSETIME as the Load Balancing Method. Configure a Backup LB Method as well; I chose LEASTCONNECTION.

You can read more about the LB Methods here


Click OK

Under Persistence, select COOKIEINSERT for Persistence with a time-out value of 0. You can also read why I selected the timeout value of 0 here

Under Backup persistence, select SOURCEIP with a timeout of 60. Fill in the Netmask as in the picture


Click OK and then Done
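Pulling the virtual server settings together, here is a CLI sketch. The VIP, vserver name and certkey name are examples, and the certkey must already be installed on the appliance:

```
add lb vserver SF_lb_vip SSL 192.168.10.20 443 -lbMethod LEASTRESPONSETIME -backupLBMethod LEASTCONNECTION -persistenceType COOKIEINSERT -timeout 0 -persistenceBackup SOURCEIP -backupPersistenceTimeout 60
bind lb vserver SF_lb_vip SF_svcgrp
bind ssl vserver SF_lb_vip -certkeyName SF_cert
```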

We have now completed almost 90% of the config. There are a couple of things left so hold on tight.

The configuration so far will ensure that load balancing is performed between the Storefront servers (I know, I know, I haven’t set up the DNS entries for the load-balanced VIP yet).

If someone types the HTTP URL of the load-balanced Storefront into their browser, it will not go anywhere; it will show them the IIS page instead. So how do we ensure that users are redirected to the correct Storefront page (the HTTPS version) every single time? We will set up another virtual server on port 80 with a redirect URL configured.

Let’s do that now.

Under Traffic Management — Load Balancing — Virtual Servers, click Add

Under Basic Settings, give the virtual server a Name and select protocol as HTTP

Specify the same IP address as the Storefront LB VIP and provide 80 for the port number

Click OK/Create

Under Persistence, select SOURCEIP with a timeout of 2 mins


Click OK

Under Protection, type the HTTPS URL that you want users to be redirected to in the Redirect URL field


Click OK. Then click Done

You will notice that the virtual server will be marked as down. That is expected, since no services are bound to it; the redirect only kicks in while the virtual server is DOWN.

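The whole redirect vserver can be sketched in one CLI line (VIP and URL are examples):

```
add lb vserver SF_http_redirect HTTP 192.168.10.20 80 -persistenceType SOURCEIP -timeout 2 -redirectURL "https://storefront.example.com/Citrix/StoreWeb"
```

Note that the persistence timeout here is in minutes, matching the 2 minutes configured above.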

DNS Changes

Now head over to the DNS server and open the DNS Console

Create an A record for the Storefront LB name, pointing to the IP address configured on the LB virtual server.

Storefront Changes

This is the last step, I promise. Head over to the Storefront servers; it’s time now to run some PowerShell commands.

Now, the monitors that we created earlier will be marked as Down if this step wasn’t performed before creating them on the NetScaler. That’s because the monitor we created probes over HTTPS while, by default, Storefront monitoring is done over HTTP.

To change this, we need to configure the monitor service to use HTTPS instead. Perform the following steps on all the Storefront servers.

Run PowerShell as an administrator.

Navigate to the Scripts folder (C:\Program Files\Citrix\Receiver StoreFront\Scripts) via PowerShell on the Storefront server.

Run ImportModules.ps1


Run the below command

 Get-DSServiceMonitorFeature


Now, type the below to set up the Storefront monitor on HTTPS

Set-DSServiceMonitorFeature -ServiceURL https://localhost:443/StoreFrontMonitor

 

Repeat the above steps on all the Storefront servers.

Now, head back to the NetScaler and you will see that the monitor is GREEN and showing a status of UP
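If a monitor stays DOWN, you can probe the monitor service yourself. A sketch, assuming an example hostname and the endpoint path the built-in STOREFRONT monitor is understood to use (verify the path against your StoreFront version):

```
curl -k https://sf01.example.com/StoreFrontMonitor/GetSFServicesStatus
```

If the monitor service is listening on HTTPS, a list of StoreFront services and their states should come back.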

That’s all we need to do to set up Storefront load balancing using NetScalers.

 

 

 

 


Don’t let your user-experience be a “Spectre” of itself after “Meltdown”


Bust your ghosts not your user experience

The names Spectre and Meltdown invoke feelings of dread in even the most seasoned IT engineer. For those uninitiated, let me get you up to speed quickly.

Spectre is a vulnerability that takes advantage of branch prediction and speculative execution, and exposes the memory of one application to another, malicious application. This can expose data such as passwords.

Meltdown is a vulnerability that takes advantage of out-of-order (speculative) execution to cross privilege boundaries, and exposes kernel memory. A compromised server or client OS running virtualized could gain access to kernel memory of the host, exposing all guest data.

Both vulnerabilities take advantage of a 20-year-old method of increasing processor performance.


As a result, code will need to be updated to address these vulnerabilities at OS and OEM-manufacturer levels, at the expense of system performance.

On their part, Microsoft reluctantly admits that performance will suffer.  “Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance,” wrote Terry Myerson, Executive Vice President for the Windows and Devices group.

According to Geek Wire, these two vulnerabilities which take advantage of a 20-year-old design flaw in modern processors can be “mitigated;” the word we’re apparently using to describe this new world in 2018, in which servers lose roughly 10 to 20% performance for several common workloads.

This affects not only workloads executed against local, on-site resources but even those utilizing services, such as AWS, Google Public Cloud or Azure.

Reader submission @ The Register showing CPU before / after patches

We’ve heard from some of our insiders who use Login VSI to validate system performance that they’re seeing a reduction of 5% in user-density after performing Microsoft recommendations. Knowing that the vulnerability wasn’t solved by OS updates alone we, at Login VSI, wanted the ability to test the impending hardware vendor firmware / BIOS changes.

Now is the time to capture your baseline performance

How do you know how much of an impact the fixes for Spectre and Meltdown have if you don’t have anything to compare against? Keep in mind that these patches will need to be installed on a number of systems in your solution, including server hardware, operating systems, storage subsystems and so on.

Many of our customers perform tests where they compare a known good solution, or a baseline, with changes that have been made. This gives them the ability to accurately assess the performance impact of that change, which in turn allows them to compensate with more hardware, or further tuning of the applications and OS. The patented methods used by Login VSI provide a quantifiable result for determining the impact of a change in virtual desktop and published application environments.

Using Login VSI

If you wish to test the changes before pushing them into your production environment, then use Login VSI to put a load, representative of your production users, on the system. This will objectively show how much more CPU will be used as a result of the Spectre or Meltdown patches. It is expected that the end users will incur increased latency to their applications and desktops as a result of the higher CPU utilization.

Using Login PI

While it is not recommended, if you are planning on pushing the patches into your production environment to “see how it goes”, then install Login PI now to get an accurate representation of performance related to user experience. This will give you the ability to then compare to that same experience after the patches have been installed. We expect that you will see latency to the end user increase as a result of higher CPU utilization. If you already struggle with CPU utilization in your solution, there is a good chance you’ll be also using Login PI to test your availability.

As we complete our testing we will be sharing our findings in a series of articles.

“If your computer has a vulnerable processor and runs an unpatched operating system, it is NOT SAFE TO WORK WITH SENSITIVE INFORMATION.” – Security experts who discovered Meltdown / Spectre

If sensitive data is part of your business (Such as ours!) patching is not a matter of if, but when.

Ask yourself:

How long can you afford to have your company’s data exposed to malicious intent?  Do you want to be the next Equifax or Target?

In this article series, we will provide some insight from our lab environments. Be aware your results may vary based upon individual workload and configuration.

Microsoft has released a Security Advisory

The vulnerability affects both the client and server OSs of Windows.  This is compounded when dealing with large-scale published application and desktop deployments.  The advisory can be found at the following location:

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV180002

The specific details addressed in the security update and Windows KB are outlined in the Common Vulnerabilities and Exposures database. Included are:

  • CVE-2017-5753 (Spectre, variant 1: bounds check bypass)
  • CVE-2017-5715 (Spectre, variant 2: branch target injection)
  • CVE-2017-5754 (Meltdown, variant 3: rogue data cache load)

To completely protect yourself there are two phases of patching this vulnerability.

1 – Windows OS updates

2 – OEM device manufacturer firmware updates (not yet available)

Microsoft acknowledges addressing these vulnerabilities from a software perspective is limited, and therefore, without the OEMs providing updates the loop is not closed.

In the interim we can start measuring the impact of the Microsoft fixes.
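One way to check whether the OS-level mitigations are actually active on a given machine is Microsoft’s SpeculationControl PowerShell module. A sketch, assuming access to the PowerShell Gallery and an elevated prompt:

```
Install-Module SpeculationControl
Get-SpeculationControlSettings
```

The output indicates, per mitigation, whether Windows support is present and enabled, and whether the required hardware/firmware support has been detected.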

They offer guidance for both Desktop and Server OSs:

Desktop – January 2018 Security Update; see the Microsoft Security Advisory.

Server – KB405690; see the Microsoft Security Advisory.

NOTE – Certain AV solutions are not compatible with the security update released by Microsoft. Unless the AV vendor sets the QualityCompat registry flag, machines will not receive the January security update and will remain vulnerable.
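For reference, the compatibility flag the January update checks for is, to the best of my knowledge, the following registry value. Set it manually only if your AV vendor has confirmed compatibility:

```
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat" /v "cadca5fe-87d3-4b96-b7fb-a231484277cc" /t REG_DWORD /d 0 /f
```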

With the upcoming OEM hardware patch releases we expect to be able to produce a variety of interesting and informative results.  Please stay tuned for the next articles!

Reference materials:

https://meltdownattack.com/

https://www.theregister.co.uk/2018/01/09/meltdown_spectre_slowdown/

https://www.geekwire.com/2018/microsoft-admits-meltdown-spectre-patches-will-hit-windows-server-performance/

Citrix Cloud Testing on Amazon EC2 M4


Citrix Cloud on AWS

I was recently afforded the unique opportunity to collaborate on a project to test capacity out of a Citrix XenApp on AWS deployment. The goal of the project was to independently determine the maximum user density for a few different EC2 instance types running XenApp 7.14.

EC2 instances are on-demand, elastic hosted server resources. This means they are provisioned dynamically from a pool of available resources, with an OS you deploy on top. Amazon provides a variety of templates to easily install Windows, Linux or your other favorite OS. EC2 instances are broken down into a few varieties: they are optimized for storage, memory, compute or graphics. The designation at the start of an instance name indicates its configuration; G3, for example, indicates a third-generation graphics-optimized instance.

The other difference between instance types is the cost. If you provision a 2 vCPU / 4 GB RAM machine, the price per hour will be significantly less than that of a 16 vCPU / 64 GB RAM machine.


This would allow the customer to match the exact machine size to the purpose of their deployment, and optimize the amount of money they were spending on their hosted application solution.

Utilizing Login VSI’s virtual users I ran a predetermined user count against a Citrix XenApp deployment managed from Citrix Cloud.

For this blog, I will only discuss one data point, and the Citrix Cloud configuration on AWS. We have a significant amount of results, and we will make those available on www.loginvsi.com/blog.

For those of you not familiar, Citrix Cloud provides Citrix capabilities traditionally delivered on-premises through an HTML, web-based user experience, so installing a Receiver is no longer required.

Some of the key components as they move into their cloud-forward offerings are StoreFront / NetScaler and Studio.


StoreFront and NetScaler are now completely managed through a web page. This removes the administrator’s responsibility of configuring hardware / software solutions for Citrix. You simply fire this up, attach it via the “Citrix Cloud Connector” and configure it to start deploying your desktops or apps. It works flawlessly.

Studio is managed through the connector as well, and provides the Citrix HTML 5 receiver for management access through the Citrix Cloud web portal.

During my time working with it, it proved to be very flexible, easy to configure and reliable for all testing. I would recommend this to any administrator looking at future proofing their Citrix deployments. It is truly ready for market.

Some images below of the management interface:

There will be a management icon within your Citrix Cloud Dashboard. Select “XenApp and XenDesktop Service”, then “Manage”.


You will then go to the management interface for XenApp / XenDesktop. You have two options, Creation and Delivery (Creation – Studio; Delivery – StoreFront / NetScaler):


Management interface for Studio. Notice the Citrix Receiver icon in the middle. Studio is provided through the Citrix HTML 5 receiver. Interesting touch.


Management for Citrix NetScaler / StoreFront:


AWS Configuration for demonstration purposes:


Color coded


Delivery group configuration:


There is only one XenApp host in each delivery group. This is to determine the maximum number of users for a single M4.2XLarge instance backing the XenApp host. We are delivering Office 2016 applications and the standard set of VSI Knowledge Worker actions.

It is very easy to change the instance type in EC2. You simply select the instance and change the “Instance Type” through the context menu.
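From the AWS CLI the same change can be sketched like this (the instance ID is a placeholder; the instance must be stopped before its type can be changed):

```
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"m4.4xlarge\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```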

There are a variety of different configurations, which allows you to really get the most out of the testing. If you are aiming for user density numbers you can size it exactly. This allows you to pay for EXACTLY what you need as opposed to over-provisioning. This will ultimately help drive the cost of VDI / SBC deployments down and increase end-user experience quality.

If you are sizing your images with Login VSI and backing them with AWS EC2 instances, you are getting an optimal user experience sized exactly right for your needs.

Information on instances:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html

VSI Results


Testing Configuration

For our testing purposes we provisioned an m4.2xlarge machine on EC2. This instance has a machine profile of 8 vCPU and 32 GB of memory, running either a Xeon E5-2686 or E5-2676. It is mostly a balanced, general-purpose machine.

Our testing configuration was 50 test users over the course of 48 minutes. We utilized the industry-standard Knowledge Worker workload, which represents a large portion of the VDI / SBC user base: Office applications plus standard desktop applications like Adobe Reader.

 

Application start times vary quite a bit, but mostly stay under 12 seconds, which would be reasonable for the users. The login process takes under 16 seconds, even at VSImax.

 

What does the backend look like?


When the CPU is at 100%, VSImax is being reached within the user sessions. This means the numbers indicate the bottleneck to be the CPU provisioned for the M4.2XLarge instance.

Wrap-up

Seeing is believing, and after testing it I can confirm that Amazon EC2 is ready for prime time. We were able to support 42 concurrent users on an M4.2XLarge while maintaining a continuous level of excellent user experience.

Amazon is ready to supplement your traditional on-premises solutions with readily available and quickly scalable resources in the cloud. Using Citrix Cloud services you can very easily scale your delivery out to support your user base as it dynamically changes.

Using Login VSI you can validate that your configurations support your users and put a check box next to user experience.

Using these three solutions you can future-proof your company and deliver on a promise of value and experience.

Finally, if you are looking for some testing for your deployment please reach out to me here or b.martynowicz@loginvsi.com.

As always stay tuned for more results.

Citrix AppDisk – All that you need to know!!


AppDisk is an awesome technology from Citrix, but it comes with its own quirks which admins/consultants should be aware of. Below are some of the items that I thought are important to know about the technology and how to set it up.

There are a few things to keep in mind before attempting to create an AppDisk.


AppDisk additional permissions

  • When you specify a size for the AppDisk, you won’t be able to utilize all of the space you allocated. For example, of a 5 GB AppDisk only about 3.66 GB is usable, so always allow some extra when creating AppDisks
  • Don’t create snapshots of the machine prior to creating the AppDisk when using MCS Catalogs
  • There is currently no way to resize the AppDisk from within the Studio. PowerShell is the way to go.
  • There is NO versioning built into AppDisks at this stage. All that you are doing when clicking on “Create New Version” is creating a clone of the existing AppDisk which could be used to edit and update the AppDisk
  • Enable both the Shadow Copy and Microsoft Software Shadow Copy Service Provider services. https://support.citrix.com/article/CTX211853
  • Some of the commands that you will find useful when working with AppDisks are as follows
>Get-AppLibAppDisk
>Get-AppLibTask

To get a list of all the active tasks running, run the below

>Get-AppLibTask -active $true

To stop a particular task, run the Get-AppLibTask and take a note of the task ID

>Stop-AppLibTask

The above stop command will not remove the failed task from the Studio console. To remove it completely from Studio, run the following command

>Remove-AppLibTask
  • In many cases, AppDisks work on different OSs. For example, you can add an AppDisk that was created on a Windows 7 VM to a Delivery Group containing Windows 2008 R2 machines, as long as both OSs have the same bitness (32 bit or 64 bit) and both support the application. However, Citrix recommends you do not add an AppDisk created on a later OS version (such as Windows 10) to a Delivery Group containing machines running an earlier OS version (such as Windows 7), because it might not work correctly.
  • Finally, the link here from Citrix is a MUST READ as it covers a lot of information on MCS type and PVS type deployments https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-8/install-configure/appdisks.html
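As an example of the cmdlets above, here is a hedged PowerShell sketch for clearing out a stuck AppDisk task. The -TaskId parameter name is my assumption; confirm it with Get-Help Stop-AppLibTask in your environment:

```
# Grab the first active (possibly stuck) AppDisk task
$task = Get-AppLibTask -active $true | Select-Object -First 1

# Stop it, then remove the failed entry from the Studio console
Stop-AppLibTask -TaskId $task.TaskId      # parameter name assumed
Remove-AppLibTask -TaskId $task.TaskId    # parameter name assumed
```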

Creation Process

  • Boot the reserved VM into the Maintenance environment and leave it at the login screen
  • Head to the Studio console and select the AppDisk node. Click Create AppDisk
  • Specify the size of the disk and a name of the AppDisk in the wizard.
  • As soon as the AppDisk creation begins, the VM will be restarted. Boot the VM back into the Maintenance vDisk
  • Now wait for the process to complete
  • In the meantime, you will be able to see a drive mapping with the label (Citrix) created on the VM, with the specified disk size of the AppDisk (5 GB in my case)
  • Refresh the Studio console to ensure that the VM is powered ON and is registered.
  • Be patient as this could take while to complete.
  • If the process gets stuck at “Creating…..” state, run the command
    Get-AppLibTask -active $true

    Check the value of TaskProgress and if it is at 95%, it’s time to restart the VM.

  • Once restarted, boot the machine back into the Maintenance disk
  • Ensure that the VM is registered. Log in to the VDA now and make sure that the AV agent isn’t running (I have seen that logging into the server helps speed things up)
  • The AppDisk creation process should now be complete.
  • It’s time now to install the applications. Right-click the AppDisk name and select Install Applications
  • Once you are happy with the app install, it’s time to seal the disk
  • Right-click and select “Seal AppDisk”
  • When the sealing process is started, the VM will restart. Just ensure that the VM restarts back into the maintenance disk
  • Once the server is back up, log into the server to speed up the sealing process. If there are AV agents running, temporarily disable them
  • The VM will restart again
  • Choose the maintenance disk again and boot into it
  • It is at this stage that AppDNA disk analysis starts (assuming that you have AppDNA integration configured)
  • Refresh the Studio now and you can see the AppDisk is at the Ready (AppDNA: Capturing) state
  • Soon the process should complete. The AppDisk should now be ready for app delivery
  • Head on to the PVS console and delete the Maintenance vDisk that was initially created for the AppDisk. Once the AppDisk is sealed, you MUST boot into the vDisk version prior to the Maintenance version to be able to see the applications installed on the AppDisk. Strange but true 🙂
  • If you need to edit (add more apps to) a sealed AppDisk, create a fresh Maintenance vDisk and continue with the updates. The older Maintenance vDisks will not work once sealed and should be removed from the PVS console (Versioning)

 

Assigning an AppDisk

As previously stated, AppDisks require a machine catalog that isn’t assigned to any delivery groups. So naturally the first step after creating an AppDisk is to create a delivery group and attach the AppDisk to it.

Updating an AppDisk

  • Create a Maintenance vDisk from the PVS Console
  • Change the VM type to Maintenance in the PVS Console (Device Collections)
  • If the Prep machine is already a member of a Delivery group, remove it from the delivery group.
  • Boot the Prep VM into the Maintenance vDisk and leave it at the login screen
  • Go to AppDisk node in Citrix Studio and select the AppDisk that needs to be updated.
  • Choose Create New Version
  • Give it a name and select the Machine catalog name where the prep machine resides
  • Click Create New Version
  • At this point, it creates a Control Disk


  • The Prep VM will now restart. The next step is to “Reserve” the virtual machine
  • Boot the VM back into the Maintenance vDisk


  • It then proceeds with the layer creation and completes it. Studio will then say the AppDisk is ready for application installs
  • Proceed to install applications as you would normally do
  • Seal the AppDisk when completed.
  • Delete the Maintenance version from the PVS console and change the VM type from Maintenance to Production

 

Diagnosing issues with AppDisk

AppDisks come with a logging tool that can be found at C:\Program Files\Citrix\personal vDisk\bin\CtxAppDisksDiag.exe

Run the above tool as an admin and select the folder where you would like to see the log files and click OK

 

Importing an AppDisk

There are times you will need to import a pre-created AppDisk to the Studio. This method will also work for the manually built virtual machines.

Carl Stalhood has detailed the process to import AppDisks in his blog post here

The curious case of NetScaler access with the error message “The connection to ‘Desktop’ failed with status (Unknown client error 1110)”


I was pulled in to look at a problem for one of our customers whose NetScalers were intermittently dropping user connections with a very “helpful” error message: “the connection to the desktop failed with status (unknown client error 1110)”.

The customer description was “it only started to happen a few weeks ago and these days its quite impossible to land a successful connection from the outside of our corporate network”

I managed to get a couple of screenshots of the error messages from the users, and they appeared as below. When queried, the customer confirmed that internal access via Storefront was working fine.


Looking at the error message, there is a multitude of reasons why you would get it, so I am outlining the common areas to check in such cases.

  • Check if the root certificates and intermediate certificates are available on the client devices. If frequently patched, the client will most probably have the latest root CAs from the various public CAs. Check IE’s / other browsers’ certificate stores to verify the root and intermediate CA SSL certs
  • If using non-IE browsers for connectivity, switch over to IE to see if it connects. IE is the safest bet when it comes to connectivity to Citrix environments.
  • Check the SSL ciphers attached to the NetScaler Gateway vServer. If only high-security ciphers are bound, this issue may occur. Relax the cipher suites to see if that makes a difference. Note that if cipher suites are the problem, the issue will occur every single time you connect, not sporadically.
  • Check the STAs on the NetScaler and ensure that they match the STAs configured on the WI/Storefront. This is one of the most important settings to check, and probably the first one to check if the issue occurs only sporadically. There is a high possibility of an STA mismatch, as it turned out to be in my case.
  • Check the firewall from the NetScaler to the VDA. As the title says, ensure that the Citrix ports (ICA 1494 and CGP 2598) to the VDA are open from the NetScaler.
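To quickly rule out the STA and firewall items above, a couple of probes can help. The hostnames are examples; run them from a machine on the same network path as the NetScaler SNIP:

```
# Is the STA answering? ctxsta.dll is the standard STA endpoint
curl -vk https://sf01.example.com/scripts/ctxsta.dll

# Are the ICA/CGP ports to the VDA open?
nc -zv vda01.example.com 1494
nc -zv vda01.example.com 2598
```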