Additional scalability recommendations for high-volume installations

Applies to the following products: 
Questionmark Perception
Applies to the following Perception versions: 
Perception 5.7

This section contains additional recommendations that you should follow when installing Perception for high-volume scenarios.

The following sections are included:

  • Assessment block and question block caching
  • ASP.Net configuration
  • P_Progress2 table
  • A_Answer table
  • Participant and QABS affinity
  • Expensive features that should be avoided in high-volume scenarios

Assessment block and question block caching

Caching significantly improves the scalability of Perception as it allows the system to construct the assessment from the database once and then reuse it for multiple participants.

You should enable both assessment block and question block caching on Enterprise Manager's Server Settings page (see below).

When a cached item expires (after the configured cache duration), it is removed from the cache; the next request for the assessment is served from the database, and the cache is then reloaded. Under high load, a large number of concurrent users can trigger this rebuild at the same time, which will impact load times. It is therefore recommended that you set the cache durations in the region of 1 hour to 1 day to reduce the number of times assessments need to be constructed.

To do this:

  1. Log in to Enterprise Manager.
  2. Click the Administration tab.
  3. Click Server Management.
  4. Click Server Settings.
  5. In the Cache Settings section, check the Assessment Block cache and Question Block cache options.
  6. Set the Assessment Block cache duration and Question Block cache duration values to values in the region of 1 hour to 1 day.
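The cache-duration behaviour described above can be illustrated with a minimal sketch (this is not Perception's implementation; the builder function and assessment keys are hypothetical):

```python
import time

def make_cached_loader(build, duration_seconds):
    """Cache values from `build` for `duration_seconds` (the cache duration)."""
    cache = {}  # key -> (value, expires_at)
    def load(key):
        entry = cache.get(key)
        now = time.monotonic()
        if entry and now < entry[1]:
            return entry[0]          # served from the cache
        value = build(key)           # expensive rebuild (the database hit)
        cache[key] = (value, now + duration_seconds)
        return value
    return load

builds = []  # records every expensive rebuild
loader = make_cached_loader(lambda k: builds.append(k) or f"assessment-{k}",
                            duration_seconds=3600)
loader("A1")
loader("A1")
print(len(builds))  # the assessment was built only once within the duration
```

A longer duration means fewer rebuilds, which is why the 1 hour to 1 day range is recommended under load.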

ASP.Net configuration

As of .NET 3.5 SP1, you should add the following XML to the C:\Windows\Microsoft.NET\Framework\v2.0.50727\Aspnet.config file:

  <configuration>
    <system.web>
      <applicationPool requestQueueLimit="5000" />
    </system.web>
  </configuration>

In step 2 (Configure PHP), it is recommended that the Max Instances FastCGI setting "be set to 10 times the number of processors you have on the server." For example, if your server has a quad-core CPU (4 processors), then you would enter 40 as the Max Instances value. The correct value is dependent on your hardware and, during load testing, you may find that a higher value achieves greater concurrency.
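The rule of thumb above reduces to a one-line calculation (the processor count here is just the quad-core example from the text):

```python
processors = 4                    # example: quad-core server
max_instances = 10 * processors   # Max Instances rule of thumb
print(max_instances)              # prints 40
```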

P_Progress2 table

This table contains the progress of a participant's exam.

There are 6 BLOB columns in this table.

The size characteristics of these BLOBs will depend on the structure of your assessment(s) and the features enabled in them, but they can become very large.

A_Answer table

This table contains a participant's answers and therefore a row will be inserted for each question in an exam.

Number of rows in the table = Number of Participants × Number of Questions in the assessment
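As a worked example of this formula (the participant and question counts below are hypothetical, not from the text):

```python
participants = 10_000            # hypothetical exam cohort
questions = 50                   # hypothetical assessment length
rows = participants * questions  # one A_Answer row per question per participant
print(rows)                      # prints 500000
```

Even modest exams therefore generate hundreds of thousands of rows, which is why the table's indexes need proactive management.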

The A_Answer table also has a number of indexes, used for reporting, that should be proactively managed.

Participant and QABS affinity

The repository's P_MessageCache table stores the serialized QABS response so that it can be accessed by multiple QABS servers in a load-balanced environment.

If you have decided to install QABS and QPLA on the same server (referencing QABS by localhost) and your participant-facing load balancer supports session affinity, then you can create an affinity between QABS and the participant, removing the need for the P_MessageCache feature.

If you have created an affinity between QABS and the participant, then you can disable this feature by setting multipleServers="False" in the adsSettings section of the ServerSettings.config file.
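For example, the relevant fragment of ServerSettings.config would look similar to this (a sketch: only the multipleServers attribute comes from the text above, and any other attributes already present in your adsSettings section should be left unchanged):

```xml
<!-- ServerSettings.config (fragment, illustrative) -->
<adsSettings multipleServers="False" />
```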

Expensive features that should be avoided in high-volume scenarios

Question randomization

As explained in Assessment block and question block caching, question block caching is an extremely important scalability feature. However, question blocks that contain question randomization are not cached and therefore have a significant (and negative) impact on scalability.

Features that rely heavily on the P_Progress2 table

The following features rely heavily on the P_Progress2 table:

  • Assessment feedback
  • Results Review Assessment
  • Save As You Go

Under high load this table can become a point of contention (see the P_Progress2 table section), and therefore these features should be used sparingly.