Warning: This page has not been updated in over a year and may be outdated or deprecated.

Testing Performance

This page is aimed at users who want to test the performance of their VuFind installation. If you are a developer interested in testing functionality of code changes, please see the unit_tests page instead.

There are two main components in VuFind installations that may influence performance:

  • Solr (including Solr settings and the hardware Solr runs on)
  • VuFind (including the environment (Apache and PHP settings) and the hardware it runs on)

Performance tests may be run with either of those as a target.

Testing with JMeter

Some VuFind libraries have used Apache JMeter to test the limits of their installations. Franck Borel has recommended several articles to help new users set up and automate a test plan.
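Once a test plan (a .jmx file) exists, load tests are normally run in JMeter's non-GUI mode from the command line. A minimal sketch, where testplan.jmx and results.jtl are placeholder file names:

# Run the test plan in non-GUI mode (-n), reading it from testplan.jmx (-t)
# and logging results to results.jtl (-l).
jmeter -n -t testplan.jmx -l results.jtl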

Testing with WCAT

The free WCAT (Web Capacity Analysis Tool) from Microsoft uses a Windows machine to generate requests against a target server. Although originally designed for use with IIS, it can easily be used to test any web server. This can be a useful way of generating load on a VuFind server and examining how the server performs.

Obtain WCAT: 32-bit download, 64-bit download

Sample scenario.txt

This configuration accesses random record pages and performs searches for random numbers (record views and one-term searches are most common, but two- and three-term searches are also included). Using random numbers as search terms constantly forces Solr to process new queries; searching for hard-coded words would not test performance realistically, since caching would return results unrealistically quickly.

Note: You may need to replace /vufind with a different base path if your VuFind installation lives in a different directory of your web server.

Note: If you are running VuFind under SSL, change “secure = false” to “secure = true” in every transaction (see the example after the scenario).

scenario
{
	warmup = 1;
	duration = 120;
	cooldown = 20;
	transaction
	{
		id = "onesearch";
		weight = 1000;
		request
		{
			url = "/vufind/Search/Results?lookfor=" + rand("1", "1000000");
			secure = false;
		}
	}

	transaction
	{
		id = "twosearch";
		weight = 500;
		request
		{
			url = "/vufind/Search/Results?lookfor=" + rand("1", "1000000") + "%20" + rand("1", "1000000");
			secure = false;
		}
	}

	transaction
	{
		id = "threesearch";
		weight = 100;
		request
		{
			url = "/vufind/Search/Results?lookfor=" + rand("1", "1000000") + "%20" + rand("1", "1000000") + "%20" + rand("1", "1000000");
			secure = false;
		}
	}

	transaction
	{
		id = "record";
		weight = 1000;
		request
		{
			url = "/vufind/Record/" + rand("1", "1000000");
			secure = false;
		}
	}
}
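
For example, the “onesearch” transaction above, adjusted for an SSL installation as described in the note, becomes:

	transaction
	{
		id = "onesearch";
		weight = 1000;
		request
		{
			url = "/vufind/Search/Results?lookfor=" + rand("1", "1000000");
			secure = true;
		}
	}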

Sample settings.txt

This configuration allows WCAT to simulate 15 simultaneous users.

settings
{
	virtualclients = 15;
}

Running the Test

This is the most basic procedure for running WCAT:

  • Open two Windows command prompts in Administrator mode.
  • In both windows, switch (“cd”) to the directory containing WCAT.
  • Make sure your scenario.txt and settings.txt files are present in the directory.
  • In the first window, run this command to start the WCAT controller:
wcctl -s myserver.myuniversity.edu -t scenario.txt -f settings.txt -c 1
  • In the second window, run this command to start the WCAT client:
wcclient localhost

More advanced features, such as controlling multiple WCAT clients or logging output to a file, are available through additional configuration and command-line switches.
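For instance, assuming the -c switch in the controller command above sets the number of clients the controller waits for, a two-client run might look like this sketch (controllerhost is a placeholder for the controller machine's name):

rem Controller: wait for two WCAT clients (-c 2).
wcctl -s myserver.myuniversity.edu -t scenario.txt -f settings.txt -c 2

rem On each of the two client machines (use "localhost" if a client runs
rem on the controller machine itself):
wcclient controllerhost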

Sample Test Plans

Keystone Library Network

Testing Scenario: OpenJDK 1.7, Solr 4, 23 GB index with ~8.8 million records.

JMeter, 400 threads (single source), 40000 single-word queries, each from a randomly pre-sorted CSV word dictionary. Download: solr_kln_00.jmx (dictionary not included)
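
The dictionary itself is not included, but a comparable randomly shuffled single-word list can be generated from any word file; a sketch for a Unix-like system (file names are placeholders):

# Shuffle a system word list and keep 40000 words (one per line) to build
# a CSV dictionary that a JMeter CSV Data Set Config element can read.
shuf /usr/share/dict/words | head -n 40000 > dictionary.csv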

CPU Cores (2.7GHz)   Memory (GB)   Garbage Collection   Fresh (Req/s)   Cached (Req/s)
24                   30            ConcMarkSweepGC      327.9           705.9
24                   30            G1GC                 93.3            167.5
24                   30            ParallelGC           320.6           701.2
24                   8             ConcMarkSweepGC      205.5           780.0
24                   8             G1GC                 64.0            147.2
24                   8             ParallelGC           250.7           788.3
2                    30            ConcMarkSweepGC      81.1            161.4
2                    30            G1GC                 65.6            123.9
2                    30            ParallelGC           75.6            220.9

Note: These results come from single runs, so individual data points may vary considerably.
