Extract the Application Groups from a Delivery Group – XenApp/XenDesktop 7.x


This is going to be a quick post explaining how you can extract all the AD groups that are currently assigned to the various applications you serve in a XenApp/XenDesktop 7.x farm. This came in very handy when I had a large number of applications that needed to be migrated to a new XenApp 7.15 LTSR farm, and it is also useful for documentation purposes.

First up, you will need to find the UUID of the Delivery Group you want to extract the details from. If you have multiple Delivery Groups, you will need to find the UUIDs of all of them.

To find the UUID, run the commands below in an elevated PowerShell window:

asnp Citrix*
Get-BrokerDesktopGroup

This returns the details of all the Delivery groups in the XenApp farm.

Take note of the UUID value.

Now run the command below to show the application names and the assigned AD user groups:

Get-BrokerApplication -AssociatedDesktopGroupUUID 918bd477-6848-4d27-b98d-28296e78d6a1 | select ApplicationName,AssociatedUserFullNames


You can get all sorts of results by changing the filters. To see the available filters, refer to the Citrix PowerShell SDK documentation for Get-BrokerApplication.

Or simply run the command below, which shows the various properties you can filter on for a given application:

Get-BrokerApplication
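
If you want to pull this information for every Delivery Group in one go, a small loop does the job. This is a minimal sketch, assuming the Citrix snap-ins are available on the machine you run it from; the output path C:\Temp\AppGroups.csv is just an example.

# Load the Citrix snap-ins if they are not loaded already
asnp Citrix*

# Loop through every Delivery Group and list its applications with the
# AD groups/users assigned to them
$report = foreach ($dg in Get-BrokerDesktopGroup) {
    Get-BrokerApplication -AssociatedDesktopGroupUUID $dg.UUID |
        Select-Object @{n='DeliveryGroup';e={$dg.Name}},
                      ApplicationName,
                      @{n='AssignedGroups';e={$_.AssociatedUserFullNames -join '; '}}
}

# Example output path - change to suit your environment
$report | Export-Csv -Path 'C:\Temp\AppGroups.csv' -NoTypeInformation

The resulting CSV gives you one row per application per Delivery Group, which is handy both for migrations and for documentation.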

That’s it for now. I hope this helps someone with their PowerShell journey in Citrix.

Storefront Load balancing using NetScaler


It’s been a while since I wrote on my blog, so let’s get straight into the post without much mucking around. This time we will discuss how to go about setting up StoreFront load balancing using NetScalers. This can be configured on a standalone NetScaler or on a NetScaler HA pair. The recommendation is obviously to set this up on an HA NetScaler pair so that a NetScaler outage doesn’t make StoreFront unavailable as well.

My StoreFront version is 3.11 and I have a server group with two StoreFront servers. The NetScaler version is 11.1, but the firmware version shouldn’t matter much as the steps are more or less the same for other NetScaler releases, newer or older (unless you are too far behind).

Pre-Requisites

To configure Storefront load balancing we need the following –

  • 2 or more StoreFront servers
  • an IP address for the virtual server that will host the LB configuration
  • an SSL certificate issued for the intended load-balanced StoreFront URL; the certificate can be a wildcard or a named certificate

First Things First

Log on to your NetScaler and navigate to System — Settings — Configure Basic Features. Ensure that Load Balancing is selected; if not, select it and click OK.


NetScaler Configuration

Create Servers

Now, navigate to Traffic Management — Load Balancing — Servers. Click Add


Give the Storefront server a name and enter the IP address of the server. Ensure that “Enable after creating” is selected. Click Create

Add the second StoreFront server following the same steps. If you have more than two servers, add them all.
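
If you prefer to script this instead of clicking through the GUI, the NetScaler NITRO REST API can create the same server objects. Treat the sketch below as an illustration only: the NSIP (10.0.0.240), credentials, server names and IPs are made-up values, and the field names follow the usual NITRO convention of mirroring the CLI parameters, so double-check them against the NITRO reference for your firmware.

# Hypothetical management address and credentials - replace with your own
$ns      = 'http://10.0.0.240'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'nsroot' }

# One server object per StoreFront server (placeholder names and IPs)
$servers = @(
    @{ name = 'SRV_SF01'; ipaddress = '10.0.0.11' },
    @{ name = 'SRV_SF02'; ipaddress = '10.0.0.12' }
)

foreach ($s in $servers) {
    $body = @{ server = $s } | ConvertTo-Json
    Invoke-RestMethod -Uri "$ns/nitro/v1/config/server" -Method Post `
        -Headers $headers -ContentType 'application/json' -Body $body
}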


Create Monitors

Newer NetScaler versions come with a built-in StoreFront monitor, so we are going to make use of it here. Go to Traffic Management — Load Balancing — Monitors and click Add.

Here I am going to create a single monitor to probe all my StoreFront servers. You can choose to create multiple monitors depending on the number of StoreFront servers you have; in my case, I will create just one.

Give a name to the monitor and select the type as STOREFRONT


Now select the Special Parameters tab and provide the name of the Store that you created in StoreFront. Check the two entries: StoreFront Account Service and Check Backend Services.


Click on the Standard Parameters tab. Ensure that Secure is selected as below. Click Create

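For reference, the same STOREFRONT monitor could be created through NITRO as well. This is only a sketch: mon_storefront and the store name 'Store' are placeholders, and the storename/secure fields are my assumption of how the GUI options map to the API, so verify them against the NITRO documentation for your firmware before using this.

# Hypothetical management address and credentials - same placeholders as before
$ns      = 'http://10.0.0.240'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'nsroot' }

# Built-in STOREFRONT monitor type, probing the store over HTTPS
$body = @{ lbmonitor = @{
    monitorname = 'mon_storefront'   # placeholder monitor name
    type        = 'STOREFRONT'
    storename   = 'Store'            # the Store you created in StoreFront
    secure      = 'YES'              # matches the Secure checkbox
} } | ConvertTo-Json

Invoke-RestMethod -Uri "$ns/nitro/v1/config/lbmonitor" -Method Post `
    -Headers $headers -ContentType 'application/json' -Body $body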

Create Service Groups

Go to Traffic Management — Load Balancing — Service Groups

Give a name to the service group and select the protocol as SSL. Check the entries below

  • State
  • Health Monitoring
  • AppFlow Logging (only if you have NetScaler MAS in your environment)

Click OK


Under Service Group Members, add the server entities that we created earlier. Once done, they will look like the below


Under Settings, enable client IP header insertion and set the Header to X-Forwarded-For.


Under Monitors, bind the monitor that we created before


Under SSL Parameters, setup the settings as below


Under Ciphers, setup the ciphers based on your company security policy.


Once done, Service Group for Storefront should look like this

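The service group, its members, the client IP header and the monitor binding can also be scripted through NITRO. Again a hedged sketch: the object names are placeholders, the cip/cipheader fields are my mapping of the GUI options, and the binding resources typically use PUT rather than POST, so check the details against your firmware's NITRO reference.

$ns      = 'http://10.0.0.240'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'nsroot' }

# SSL service group for StoreFront with client IP insertion into X-Forwarded-For
$body = @{ servicegroup = @{
    servicegroupname = 'SG_StoreFront'
    servicetype      = 'SSL'
    cip              = 'ENABLED'
    cipheader        = 'X-Forwarded-For'
} } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/servicegroup" -Method Post `
    -Headers $headers -ContentType 'application/json' -Body $body

# Bind the two StoreFront server objects on port 443 (NITRO bindings generally use PUT)
foreach ($srv in 'SRV_SF01','SRV_SF02') {
    $body = @{ servicegroup_servicegroupmember_binding = @{
        servicegroupname = 'SG_StoreFront'; servername = $srv; port = 443 } } | ConvertTo-Json
    Invoke-RestMethod -Uri "$ns/nitro/v1/config/servicegroup_servicegroupmember_binding" `
        -Method Put -Headers $headers -ContentType 'application/json' -Body $body
}

# Bind the StoreFront monitor created earlier
$body = @{ servicegroup_lbmonitor_binding = @{
    servicegroupname = 'SG_StoreFront'; monitor_name = 'mon_storefront' } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/servicegroup_lbmonitor_binding" `
    -Method Put -Headers $headers -ContentType 'application/json' -Body $body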

Now, it’s time to create the Virtual Server

Virtual Server

As mentioned in the pre-requisites section, we need an IP address for this. If the NetScalers are sitting in the DMZ, a DMZ IP address is required. In my case, the NetScalers are hosted internally, so I will use an unused internal IP address.

We will also need the SSL certificate here.

Go to Traffic Management — Load Balancing — Virtual Servers

Click Add

Give a Name to the virtual server and select the protocol as SSL

Specify the IP address in the IP Address field and set the port to 443.


Click More and specify the settings as below (note that AppFlow Logging only needs to be enabled if you have NetScaler MAS or another monitoring solution that can make use of AppFlow logs).


 

Under Services and Service Groups, click on Load Balancing Virtual Server ServiceGroup Binding

Click Add Binding and select the Service Group that you created in the previous step. Click OK

Once completed, the page should look like the below. Click Close and click Done


It’s time to attach the certificate. Go to Traffic Management — SSL — Manage Certificates / Keys / CSRs


 

Click the Upload button and upload your certificate file to the NetScaler.

Go to Traffic Management — SSL — Certificates — Server certificates

Under Certificate, click on Server Certificate and then Install

Give a certificate key-pair name and choose the certificate that was just uploaded in the previous step. Click Install

Now, go back to Traffic Management — Load Balancing — Virtual Servers

Select the Virtual server created for Storefront and click Edit. Under Certificates, select Server Certificate and then Click Add Binding

Under SSL Ciphers, select the ciphers that you would like to be in place. I am going with the default one, which is not the most secure choice for a production setup, so go with something that meets your organization’s security requirements.

Under SSL Parameters, configure the settings as below. Click OK


Under Method, select LEASTRESPONSETIME as the Load Balancing Method. Configure a Backup LB Method; I chose LEASTCONNECTION.

You can read more about the LB methods in the Citrix documentation.


Click OK

Under Persistence, select COOKIEINSERT for Persistence with a time-out value of 0. With a time-out of 0, the persistence cookie is issued as a session cookie with no expiry time, so it lasts for the browser session.

Under Backup persistence, select SOURCEIP with a timeout of 60. Fill in the Netmask as in the picture


Click OK and then Done
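
For completeness, here is roughly what the virtual server piece could look like through NITRO. Treat it as a sketch: the VIP 10.0.0.50, the object names and the certificate key-pair name are placeholders, and the lbmethod/persistence field names are my mapping of the GUI options, so validate them against the NITRO reference before relying on it.

$ns      = 'http://10.0.0.240'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'nsroot' }

# SSL load balancing virtual server with the LB method and persistence chosen above
$body = @{ lbvserver = @{
    name                     = 'VS_StoreFront_SSL'
    servicetype              = 'SSL'
    ipv46                    = '10.0.0.50'        # placeholder VIP
    port                     = 443
    lbmethod                 = 'LEASTRESPONSETIME'
    backuplbmethod           = 'LEASTCONNECTION'
    persistencetype          = 'COOKIEINSERT'
    timeout                  = 0
    persistencebackup        = 'SOURCEIP'
    backuppersistencetimeout = 60
} } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/lbvserver" -Method Post `
    -Headers $headers -ContentType 'application/json' -Body $body

# Bind the StoreFront service group (NITRO bindings generally use PUT)
$body = @{ lbvserver_servicegroup_binding = @{
    name = 'VS_StoreFront_SSL'; servicegroupname = 'SG_StoreFront' } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/lbvserver_servicegroup_binding" -Method Put `
    -Headers $headers -ContentType 'application/json' -Body $body

# Bind the certificate key-pair installed for the StoreFront URL (placeholder name)
$body = @{ sslvserver_sslcertkey_binding = @{
    vservername = 'VS_StoreFront_SSL'; certkeyname = 'wildcard_mydomain' } } | ConvertTo-Json
Invoke-RestMethod -Uri "$ns/nitro/v1/config/sslvserver_sslcertkey_binding" -Method Put `
    -Headers $headers -ContentType 'application/json' -Body $body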

We have now completed almost 90% of the config. There are a couple of things left so hold on tight.

The configuration so far will ensure that load balancing is performed between the StoreFront servers (I know, I know, I haven’t set up the DNS entries for the load-balanced VIP yet).

If someone types the HTTP URL of the load-balanced StoreFront into their browser, it will not go anywhere; it will show them the default IIS page instead. So how do we ensure that users are redirected to the correct StoreFront page (the HTTPS version) every single time? We will set up another virtual server on port 80 with a redirect URL configured.

Let’s do that now.

Under Traffic Management — Load Balancing — Virtual Servers, click Add

Under Basic Settings, give the virtual server a Name and select protocol as HTTP

Specify the same IP address as for the Storefront LB VIP and provide 80 for the Port #

Click OK/Create

Under Persistence, select SOURCEIP with a timeout of 2 mins


Click OK

Under Protection, in the Redirect URL field, type the HTTPS URL that you want users to be redirected to.


Click OK. Then click Done

You will notice that the virtual server is marked as Down. This is expected: no services are bound to it, and the redirect URL only takes effect while the virtual server is down, so anything hitting port 80 is simply redirected.

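The HTTP-to-HTTPS redirect virtual server can be expressed the same way through NITRO. As before, this is a sketch with made-up names and addresses, and the redirecturl field is assumed to mirror the Redirect URL option in the GUI.

$ns      = 'http://10.0.0.240'
$headers = @{ 'X-NITRO-USER' = 'nsroot'; 'X-NITRO-PASS' = 'nsroot' }

# HTTP vserver on the same VIP; it has no services bound, so it stays Down
# and the redirect URL sends users to the HTTPS StoreFront page
$body = @{ lbvserver = @{
    name            = 'VS_StoreFront_HTTP_Redirect'
    servicetype     = 'HTTP'
    ipv46           = '10.0.0.50'        # same placeholder VIP as the SSL vserver
    port            = 80
    persistencetype = 'SOURCEIP'
    timeout         = 2                  # persistence timeout in minutes
    redirecturl     = 'https://storefront.mydomain.com/Citrix/StoreWeb'
} } | ConvertTo-Json

Invoke-RestMethod -Uri "$ns/nitro/v1/config/lbvserver" -Method Post `
    -Headers $headers -ContentType 'application/json' -Body $body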

DNS Changes

Now head over to the DNS server and open the DNS Console

Create an A record for the StoreFront load-balanced name, pointing to the IP address configured on the LB virtual server.
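
If your DNS runs on Windows Server, the same record can be created with the DnsServer PowerShell module. The zone name, record name and IP below are placeholders for your environment.

# Creates storefront.mydomain.com -> 10.0.0.50 (placeholder values)
Add-DnsServerResourceRecordA -ZoneName 'mydomain.com' `
    -Name 'storefront' -IPv4Address '10.0.0.50'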

Storefront Changes

This is the last step, I promise. Head over to the StoreFront servers; it’s time now to run some PowerShell commands.

Now, the monitor that we created earlier will be marked as Down if this step wasn’t performed before creating it on the NetScaler. That’s because the monitor we created uses HTTPS, while by default StoreFront monitoring is done over HTTP.

To change this, we need to configure the monitor service to use HTTPS instead. Perform the following steps on all of the StoreFront servers.

Run PowerShell as an administrator.

Navigate to the Scripts folder (C:\Program Files\Citrix\Receiver StoreFront\Scripts) in that PowerShell session on the StoreFront server.

Run ImportModules.ps1


Run the below command

Get-DSServiceMonitorFeature


Now, type the below to set up the StoreFront monitor on HTTPS.

Set-DSServiceMonitorFeature -ServiceURL https://localhost:443/StoreFrontMonitor

 

Repeat the above steps on all the Storefront servers.
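
If you have more than a couple of StoreFront servers, you can push the change to all of them with PowerShell remoting instead of logging on to each one. A sketch, assuming WinRM is enabled and that the server names below are placeholders for your own.

# Placeholder server names - replace with your StoreFront servers
$storefrontServers = 'SF01', 'SF02'

Invoke-Command -ComputerName $storefrontServers -ScriptBlock {
    # Load the StoreFront PowerShell modules, then switch the monitor to HTTPS
    & 'C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1'
    Set-DSServiceMonitorFeature -ServiceUrl 'https://localhost:443/StoreFrontMonitor'
    Get-DSServiceMonitorFeature   # confirm the new ServiceUrl on each server
}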

Now, head back to the NetScaler and you will see that the monitor is green and showing a status of UP.

That’s all we need to do to set up StoreFront load balancing using NetScalers.

 

 

 

 

Don’t let your user-experience be a “Spectre” of itself after “Meltdown”


Bust your ghosts not your user experience

The names Spectre and Meltdown invoke feelings of dread in even the most seasoned IT engineer.  To those uninitiated, let me get you up-to-speed quickly.

Spectre is a vulnerability that takes advantage of branch prediction and speculative execution, and allows a malicious application to read memory belonging to another application.  This can expose data such as passwords.

Meltdown is a vulnerability that takes advantage of speculative execution to bypass the privilege boundary between user and kernel memory, exposing kernel memory.  A compromised server or client OS running virtualized could gain access to kernel memory of the host, exposing data from other guests.

Both vulnerabilities take advantage of a 20-year-old method of increasing processor performance.


As a result, code will need to be updated to address these vulnerabilities at both the OS and OEM firmware level, at the expense of system performance.

On their part, Microsoft reluctantly admits that performance will suffer.  “Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance,” wrote Terry Myerson, Executive Vice President for the Windows and Devices group.

According to Geek Wire, these two vulnerabilities which take advantage of a 20-year-old design flaw in modern processors can be “mitigated;” the word we’re apparently using to describe this new world in 2018, in which servers lose roughly 10 to 20% performance for several common workloads.

This affects not only workloads executed against local, on-site resources but even those utilizing services, such as AWS, Google Public Cloud or Azure.

Reader submission @ The Register showing CPU utilization before / after patches

We’ve heard from some of our insiders who use Login VSI to validate system performance that they’re seeing a reduction of 5% in user-density after performing Microsoft recommendations. Knowing that the vulnerability wasn’t solved by OS updates alone we, at Login VSI, wanted the ability to test the impending hardware vendor firmware / BIOS changes.

Now is the time to capture your baseline performance

How do you know how much of an impact the fixes for Spectre and Meltdown will have if you don’t have anything to compare it to? Keep in mind that these patches will need to be installed on a number of systems in your solution, including server hardware, operating systems, storage subsystems and so on.

Many of our customers perform tests where they compare a known good solution, or a baseline, with changes that have been made. This gives them the ability to accurately assess the performance impact of that change, which in turn allows them to compensate with more hardware, or further tuning of the applications and OS. The patented methods used by Login VSI provide a quantifiable result for determining the impact of a change in virtual desktop and published application environments.

Using Login VSI

If you wish to test the changes before pushing them into your production environment, then use Login VSI to put a load, representative of your production users, on the system. This will objectively show how much more CPU will be used as a result of the Spectre or Meltdown patches. It is expected that the end users will incur increased latency to their applications and desktops as a result of the higher CPU utilization.

Using Login PI

While it is not recommended, if you are planning on pushing the patches into your production environment to “see how it goes”, then install Login PI now to get an accurate representation of performance related to user experience. This will give you the ability to then compare to that same experience after the patches have been installed. We expect that you will see latency to the end user increase as a result of higher CPU utilization. If you already struggle with CPU utilization in your solution, there is a good chance you’ll be also using Login PI to test your availability.

As we complete our testing we will be sharing our findings in a series of articles.

“If your computer has a vulnerable processor and runs an unpatched operating system, it is NOT SAFE TO WORK WITH SENSITIVE INFORMATION.” – the security researchers who discovered Meltdown / Spectre

If sensitive data is part of your business (such as ours!), patching is not a matter of if, but when.

Ask yourself:

How long can you afford to have your company’s data exposed to malicious intent?  Do you want to be the next Equifax or Target?

In this article series, we will provide some insight from our lab environments. Be aware your results may vary based upon individual workload and configuration.

Microsoft has released a Security Advisory

The vulnerabilities affect both the client and server versions of Windows.  This is compounded when dealing with large-scale published application and desktop deployments.  The advisory can be found at the following location:

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV180002

The specific details addressed in the security update and Windows KB are outlined in the Common Vulnerabilities and Exposures database.

Included are:

  • CVE-2017-5753 (Spectre variant 1, bounds check bypass)
  • CVE-2017-5715 (Spectre variant 2, branch target injection)
  • CVE-2017-5754 (Meltdown, rogue data cache load)

To completely protect yourself there are two phases of patching this vulnerability.

1 – Windows OS updates

2 – OEM device manufacturer firmware updates (not yet available)

Microsoft acknowledges that addressing these vulnerabilities from a software perspective alone is limited; without the OEMs providing firmware updates, the loop is not closed.

In the interim we can start measuring the impact of the Microsoft fixes.
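
A practical starting point is to check which mitigations are actually active on each host before and after patching. Microsoft published a SpeculationControl PowerShell module for exactly this purpose; the sketch below assumes you can install modules from the PowerShell Gallery on the machine in question.

# Install Microsoft's SpeculationControl module from the PowerShell Gallery
Install-Module SpeculationControl -Scope CurrentUser -Force

# Reports whether the branch target injection (Spectre variant 2) and
# rogue data cache load (Meltdown) mitigations are present and enabled
Get-SpeculationControlSettings

Running this before the OS update, after the OS update, and again after the firmware update gives you a clear picture of which mitigations each change actually enabled.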

They offer guidance for both Desktop and Server OSs:

Desktop – January 2018 Security Update; see the security advisory linked above.

Server – KB405690; see the security advisory linked above.

NOTE – Certain AV solutions are not compatible with the security update released by Microsoft. Unless the AV vendor (or an administrator) sets the QualityCompat registry flag on the machine, it will not be offered the January security update and will remain vulnerable.
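
You can check whether the compatibility flag is present before wondering why a machine never offers the January update. The key and value below are the ones Microsoft documented for AV compatibility; treat this as a quick check rather than a recommendation to set the flag yourself unless your AV vendor confirms it is safe.

# Registry flag that AV vendors set to signal compatibility with the January 2018 updates
$key   = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat'
$value = 'cadca5fe-87d3-4b96-b7fb-a231484277cc'

if (Get-ItemProperty -Path $key -Name $value -ErrorAction SilentlyContinue) {
    Write-Output 'QualityCompat flag present - machine is eligible for the update.'
} else {
    Write-Output 'QualityCompat flag missing - the January security update will not be offered.'
}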

With the upcoming OEM hardware patch releases we expect to be able to produce a variety of interesting and informative results.  Please stay tuned for the next articles!

Reference materials:

https://meltdownattack.com/

https://www.theregister.co.uk/2018/01/09/meltdown_spectre_slowdown/

https://www.geekwire.com/2018/microsoft-admits-meltdown-spectre-patches-will-hit-windows-server-performance/

Citrix Cloud Testing on Amazon EC2 M4


Citrix Cloud on AWS

I was recently afforded the unique opportunity to collaborate on a project to test capacity out of a Citrix XenApp on AWS deployment. The goal of the project was to independently determine the maximum user density for a few different EC2 instance types running XenApp 7.14.

EC2 instances are on-demand, elastic, hosted server resources, which means they are provisioned dynamically from a pool of available resources, with an OS you deploy on top. Amazon provides a variety of templates to easily install Windows, Linux or your other favorite OS. EC2 instances are broken down into a few varieties: they are optimized for storage, memory, compute or graphics. The designation at the start of the instance name indicates its configuration; G3, for example, indicates a third-generation graphics-optimized instance.

The other difference between instance types is the cost. If you are provisioning a 2 vCPU / 4 GB RAM machine, the price per hour is significantly less than that of a 16 vCPU / 64 GB RAM machine.


This would allow the customer to match the exact machine size to the purpose of their deployment, and optimize the amount of money they were spending on their hosted application solution.

Utilizing Login VSI’s virtual users I ran a predetermined user count against a Citrix XenApp deployment managed from Citrix Cloud.

For this blog, I will only discuss one data point, and the Citrix Cloud configuration on AWS. We have a significant amount of results, and we will make those available on www.loginvsi.com/blog.

For those of you not familiar with it, Citrix Cloud provides Citrix capabilities traditionally delivered on premises through an HTML, web-based user experience, so installing a receiver for management is no longer required.

Some of the key components as they move into their cloud-forward offerings are StoreFront, NetScaler and Studio.


StoreFront and NetScaler are now managed entirely through a web page. This completely removes the administrator’s responsibility of configuring hardware / software solutions for Citrix. You simply fire this up, attach it via the Citrix Cloud Connector and configure it to start deploying your desktops or apps. It works flawlessly.

Studio is managed through the connector as well, and provides the Citrix HTML 5 receiver for management access through the Citrix Cloud web portal.

During my time working with it, it proved to be very flexible, easy to configure and reliable throughout testing. I would recommend this to any administrator looking at future-proofing their Citrix deployments. It is truly ready for market.

Some images below of the management interface:

There will be a management icon within your Citrix Cloud Dashboard. Select “XenApp and XenDesktop Service”, then “Manage”.


You will then go to the management interface for XenApp / XenDesktop; you have two options, Creation and Delivery. Creation covers Studio, and Delivery covers StoreFront / NetScaler:


Management interface for Studio. Notice the Citrix Receiver icon in the middle. Studio is provided through the Citrix HTML 5 receiver. Interesting touch.


Management for Citrix NetScaler / StoreFront:


AWS Configuration for demonstration purposes:



Delivery group configuration:


There is only one XenApp host in each delivery group; this is to determine the maximum number of users for a single m4.2xlarge instance backing the XenApp host. We are delivering Office 2016 applications and the standard set of Login VSI Knowledge Worker actions.

It is very easy to change the instance type in EC2. You simply select the instance and change the Instance Type through the context menu.
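
If you are resizing repeatedly between test runs, the same change can be scripted with the AWS Tools for PowerShell. This is a sketch only, assuming the AWSPowerShell module and credentials are already configured; the instance ID is made up, and you should check the Edit-EC2InstanceAttribute parameters against the module version you have installed.

# Hypothetical instance ID - the instance must be stopped before resizing
$instanceId = 'i-0123456789abcdef0'

Stop-EC2Instance -InstanceId $instanceId
# Wait until the instance reports 'stopped' before changing the type (polling omitted here)

Edit-EC2InstanceAttribute -InstanceId $instanceId -InstanceType 'm4.2xlarge'
Start-EC2Instance -InstanceId $instanceId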

There are a variety of different configurations, which allows you to really get the most out of the testing. If you are aiming for user-density numbers, you can size the instance exactly. This allows you to pay for EXACTLY what you need as opposed to over-provisioning, which ultimately helps drive down the cost of VDI / SBC deployments and increases end-user experience quality.

If you are sizing your images with Login VSI and backing them with AWS EC2 instances, you are getting an optimal user experience sized exactly for your needs.

Information on instances:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html

VSI Results


Testing Configuration

For our testing purposes we provisioned an m4.2xlarge machine on EC2. This instance has a machine profile of 8 vCPU and 32 GB of memory, running either an Intel Xeon E5-2686 or E5-2676. It is mostly a balanced, general-use machine.

Our testing configuration was 50 test users over the course of 48 minutes. We utilized the industry-standard Knowledge Worker workload, which represents a large portion of the VDI / SBC user base: Office applications plus standard desktop applications like Adobe Reader.

 

Application start times are somewhat all over the place, but for the most part they stay under 12 seconds, which would be reasonable for the users. The login process takes under 16 seconds, even at VSImax.

 

What does the backend look like?


When the CPU is at 100%, VSImax is reached within the user sessions. This means the numbers indicate that the bottleneck is the CPU provisioned for the m4.2xlarge instance.

Wrap-up

Seeing is believing, and after testing it I can confirm that Amazon EC2 is ready for prime time. We were able to support 42 concurrent users on an m4.2xlarge while maintaining a continuously excellent level of user experience.

Amazon is ready to supplement your traditional on-premises solutions with readily available and quickly scalable resources in the cloud. Using Citrix Cloud services, you can very easily scale your delivery out to support your user base as it dynamically changes.

Using Login VSI, you can validate that your configurations will support your users and put a check mark next to user experience.

Using these three solutions you can future-proof your company and deliver on a promise of value and experience.

Finally, if you are looking for some testing for your deployment, please reach out to me at b.martynowicz@loginvsi.com.

As always stay tuned for more results.