



Manual Testing

VuFind® uses a set of unit tests to check for problems as the software develops and evolves. Automated tests are generally the preferred way to test software, since they are repeatable and require no human intervention. However, humans can sometimes detect problems that automated tests cannot (such as aesthetic regressions), and manual testing can also be a useful lead-up to developing automated tests. This page contains notes intended to help manual testers.

General Considerations

Browser Testing

It is a good idea to test the interface in multiple browsers to check for display inconsistencies and JavaScript incompatibilities. We are most concerned with compatibility with the current generation of browsers, but compatibility with earlier versions is desirable when possible. Recommended browsers to test:

  • Firefox
  • Edge
  • Chrome
  • Safari
  • Opera

The automated test suite typically runs in Chrome, so doing manual testing in other browsers is a good way to potentially uncover browser-specific issues.

Input Testing

Some general patterns to follow when testing new features:

  • Test fields stored in the database for SQL injection vulnerabilities.
  • Test fields sent as Solr searches for vulnerability to syntax errors.
  • Test fields where input is stored and displayed back for XSS vulnerabilities and missing HTML/URL encoding. A good test is “<script>alert('test');</script>” – if there is an XSS vulnerability, this will cause a pop-up to appear!
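The XSS check above can be sketched as follows. This is a hedged, language-neutral illustration in Python (VuFind® itself is PHP, and the `render_comment` helper is invented for the example): any user-supplied string that is echoed back to the page should come out HTML-encoded, so the probe string never reaches the browser as live markup.

```python
import html

# A classic XSS probe: if this string survives unescaped into the page,
# the browser will execute it and show an alert box.
XSS_PROBE = "<script>alert('test');</script>"

def render_comment(user_input: str) -> str:
    """Hypothetical view helper: escape user input before display."""
    return "<p>" + html.escape(user_input) + "</p>"

rendered = render_comment(XSS_PROBE)

# The raw <script> tag must not appear in the output...
assert "<script>" not in rendered
# ...but the content should still be visible, just encoded.
assert "&lt;script&gt;" in rendered
print(rendered)
```

When manually testing, the equivalent check is simply to submit the probe string into a form field and confirm that no pop-up appears and that the text displays literally on the page.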

If you find something that breaks, please don't just fix it – also build an automated test to prevent regressions!

Testing Integrations

VuFind®'s test suite cannot connect to live third-party systems. While it can simulate these interactions for testing purposes, it cannot detect problems when a third-party system actually changes its interface. There is no way around this – we do not own copies of every Integrated Library System ever published, nor do we have subscriptions to every third-party resource. Therefore, user testing of integrations is valuable to ensure that our code remains up-to-date and relevant.

Some areas of code that benefit from this type of testing:

  • Export Targets (e.g. EndNote, RefWorks, etc.)
  • ILS Drivers (e.g. Alma, Aleph, etc.)
  • Search Backends (e.g. EDS, Summon, WorldCat, etc.)
  • SMTP integration (e.g. sending emails via various services)
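To make the limitation concrete, here is a sketch of how automated tests typically substitute a mock for a live connection (the `IlsClient` class and its methods are invented for illustration; VuFind® itself is PHP and uses its own driver classes). The test passes as long as the mock matches our assumptions about the remote service – even if the real service has since changed its response format, which is exactly the gap manual testing fills.

```python
from unittest.mock import Mock

class IlsClient:
    """Hypothetical client for an ILS 'get item status' call."""

    def __init__(self, connection):
        self.connection = connection

    def get_status(self, item_id: str) -> str:
        # Expects the service to return {"status": "..."} -- if the real
        # service renames this key, only a live test will notice.
        response = self.connection.fetch(f"/items/{item_id}")
        return response["status"]

# The automated test replaces the network layer with a mock:
fake_connection = Mock()
fake_connection.fetch.return_value = {"status": "available"}

client = IlsClient(fake_connection)
assert client.get_status("123") == "available"  # passes against the mock
# Only a manual test against the real system can confirm that the assumed
# response format is still what the service actually returns.
```

This is why the areas listed above benefit from periodic hands-on testing against the real services.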

development/testing/manual_testing.1681318583.txt.gz · Last modified: 2023/04/12 16:56 by demiankatz