
Video 7: Indexing XML Records

The seventh VuFind® instructional video explains how to import XML records using XSLT, with an emphasis on records that were harvested via OAI-PMH.

The video is available as an mp4 download or through YouTube.

Update Notes

:!: This video was recorded using VuFind 6.1. In VuFind 8.0, changes were made which impact the content of this video:

  • The ojs-multirecord.xsl file has been removed, and the standard ojs.xsl file has been updated to handle both the single-record and multi-record cases. All of the information in this video about the advantages and disadvantages of each technique still applies, but it is no longer necessary to make changes to ojs.properties in order to support the multi-record case. The only changes you need to make are in oai.ini, to control how records are harvested.
  • All of the other example XSLT files have been adjusted to support multi-record indexing, so you can apply this technique to records harvested from other systems as well.

Transcript

Welcome to the seventh VuFind tutorial video. This is a continuation of last month's video about OAI-PMH, where we learned how to harvest XML records using tools bundled with VuFind. This month, we are going to look at what to do with those records once we have them and talk about indexing XML generally.

The first thing that I should emphasize is that MARC-XML is a special exception. You can use VuFind's standard MARC indexing tools, which we talked about several months ago, to import both binary MARC and MARC-XML. That is much easier than trying to use the tools we'll talk about today to index MARC, which is of course extremely complex. So don't make extra work for yourself by trying to load MARC using XSLT; other tools are available for that. But for everything else, what we'll talk about today should be helpful.

So, I've already mentioned XSLT, so that was a bit of a spoiler. VuFind uses XSLT for loading XML data into Solr. So, first, I should talk a little bit about what XSLT is. It's short for Extensible Stylesheet Language Transformations, and it's a declarative programming language where you build an XML document that tells the XSLT engine how to transform one XML document into another XML document.

There are several versions of XSLT. I believe the language is up to version 3.0 right now, but PHP's built-in XSLT processor only supports version 1.0 of the language. Obviously, I'm not going to teach you XSLT today in five minutes. It's a bit of a project to learn. So, if you do go off and read a tutorial about it, be sure you find one about the original version of the language and not the later ones that add a lot of additional features.

It's perhaps a little unfortunate that PHP doesn't support newer XSLT versions. But this is compensated for quite a bit by the fact that there are bindings between XSLT and PHP, so you can write custom functions in PHP and use them in your XSLT. Whenever there's missing functionality in XSLT, you can usually cover that gap with a PHP function. And VuFind comes packaged with a number of example functions for common needs, and lots of examples of XSLT as well.
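To make that concrete, here is a minimal sketch of how a PHP function call looks inside an XSLT 1.0 sheet when PHP's XSLTProcessor is driving the transformation. The class and method name (MyClass::cleanTitle) are hypothetical, invented just for this illustration; the sheets that ship with VuFind call its own helper class instead.

  <?xml version="1.0" encoding="UTF-8"?>
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:php="http://php.net/xsl">
    <!-- The php namespace above is what enables calls into registered PHP code. -->
    <xsl:template match="/record">
      <title>
        <!-- Pass the text of the title tag through a PHP static method. -->
        <xsl:value-of select="php:function('MyClass::cleanTitle', string(title))"/>
      </title>
    </xsl:template>
  </xsl:stylesheet>

On the PHP side, the import tool has to call registerPHPFunctions() on the XSLTProcessor for php:function() to work; VuFind's importer takes care of that for you.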

For today's example, I'm going to harvest an OJS journal called Expositions, which is hosted at Villanova. OJS is Open Journal Systems, an open-source journal hosting platform that supports OAI-PMH, so this is a good example of a real-world system that you can harvest from and index in VuFind. VuFind includes some sample configurations and XSLT sheets for harvesting from OJS and indexing the resulting data. Again, it's a pretty good, simple, real-world example.

So I'm going to go to the command line, where I'm in my VuFind home directory, and just show a couple of files to give you a taste of what this all looks like. All of VuFind's sample XSLT sheets are in the import/xsl subdirectory of your VuFind home. And as you can see, we actually have three different flavors of OJS XSLT. We have an NLM OJS sheet, which uses the National Library of Medicine's metadata standard, a bit richer than the default oai_dc Dublin Core data. But for today's demonstration, I'm just going to use ojs.xsl, which indexes the Dublin Core.

We also have ojs-multirecord.xsl, which I will show you a little later, so stay tuned for that. But to get things started, I'm just going to show you what ojs.xsl looks like. As I mentioned, an XSLT is just an XML document, and it really works by pattern matching using XPath, which is a way of specifying particular locations within an XML document.

So within an XSLT, anything that you see that's prefixed with xsl: is an XSLT command, and anything else is actual output that the XSLT is going to create. The XSLTs in VuFind are all designed to create Solr documents for indexing, which always have a top-level add tag that contains doc tags, which in turn contain the fields that need to be added to the Solr documents.

So the XSLTs are mostly a matter of defining Solr fields and using XSLT rules to fill those fields with the appropriate data. For example, to get our unique ID, we're pulling from an XML tag called identifier. We have a hard-coded record format, so this just puts a literal value into every record, which would enable us to create an OJS-specific record driver if we wanted to.

We have an allfields field to index all of the text within the XML document, which uses some XSLT functions to extract that text. We use variables, which XSLT supports, to pass in institution and collection values; I will show you momentarily how these variables get set. XSLT also supports looping for multi-valued fields. For example, this code here populates VuFind's language field by looping through every Dublin Core language tag in the document and, for any non-empty values, calling a PHP function that translates the strings from two-letter or three-letter codes into full textual representations.

Again, I obviously can't go into great depth about how all of this works here, but hopefully this gives you a little taste. If you go off and read an XSLT tutorial or two, it should make even more sense.
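As a rough illustration of the kinds of rules just described, here is a heavily simplified sketch. Field names, the helper function name (VuFind::mapLanguageCode) and the omitted namespace declarations are illustrative only; the real rules live in import/xsl/ojs.xsl.

  <add>
    <doc>
      <!-- Unique ID pulled from the injected identifier tag -->
      <field name="id">
        <xsl:value-of select="//identifier"/>
      </field>
      <!-- Hard-coded literal value added to every record -->
      <field name="record_format">ojs</field>
      <!-- Value passed in as a variable from the properties file -->
      <field name="institution">
        <xsl:value-of select="$institution"/>
      </field>
      <!-- Loop over every language tag, skipping empty values -->
      <xsl:for-each select="//dc:language">
        <xsl:if test="string-length(normalize-space()) > 0">
          <field name="language">
            <!-- hypothetical stand-in for VuFind's real code-mapping helper -->
            <xsl:value-of select="php:function('VuFind::mapLanguageCode', string(.))"/>
          </field>
        </xsl:if>
      </xsl:for-each>
    </doc>
  </add>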

So, the XSLT is only part of what VuFind needs to do XML indexing. The other part is a properties file for the import tool, which tells it not only which XSLT to use but also what custom PHP functions to make available and what values to set for any variables that are used within the XSLT.

Let's look at the properties file that goes with that XSLT. All of the import properties files live in the import directory, and they all contain lots of comments explaining in detail what the settings mean. But just to go through the highlights: of course, there's an XSLT setting that tells the importer which XSLT to use. And, as I teased earlier, with OJS you actually have a choice between the regular ojs.xsl, which will index one Dublin Core record at a time, and ojs-multirecord.xsl, which can index a grouping of Dublin Core records all in one file.

The multi-record approach is much faster; it just requires some extra work when you harvest, and I'm going to show you how to use both of these today. We'll start with one record at a time and work our way up to multi-record. You can also expose specific PHP functions directly to the XSLT just by creating a list of functions here.

By default, none of the packaged configurations do this, but it is a possibility if you want to make PHP functions available to your XSLT. You can also create a class full of custom functions and expose all of them to your XSLT. Most of VuFind's examples just use a VuFind XSLT import class full of static functions for exposing custom behavior, like that string mapping I showed you for the language import. Moving on down, there's the ability to pass the custom classes to the XSLT using their fully qualified names with the namespace, but all of VuFind's configurations truncate off the namespace and just expose the base class name, which makes the XSLT a little shorter and more readable. So every time I call a VuFind function, I just say "VuFind::functionName" instead of having to type out the full namespaced class name. That's what the custom class truncation setting controls. Finally, there's a parameters section, and this is where you set the values that are exposed as variables to the XSLT.
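Putting those highlights together, a condensed sketch of this kind of properties file might look like the following. The setting names follow my recollection of import/ojs.properties; the comments in the real file are the authoritative reference.

  [General]
  ; which XSLT sheet to apply to each harvested file
  xslt = ojs.xsl

  ; optionally expose individual PHP functions to the XSLT
  ;php_function[] = str_replace

  ; expose a whole class of static helper methods...
  custom_class[] = "VuFind\XSLT\Import\VuFind"
  ; ...and let the XSLT call VuFind::methodName instead of the full namespace
  truncate_custom_class = true

  [Parameters]
  ; values exposed to the XSLT as variables
  institution = "My University"
  collection = "OJS"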

So I showed you earlier that the institution and collection fields in the Solr index are getting set from variables, and the variables are set here. So by default, you're going to get institution set to "My University" and collection set to "OJS". Before I can show you any more of the actual importing process, we're going to need some records to play with. So let me set up the OAI-PMH harvesting for Expositions. I'm going to edit my local/harvest/oai.ini file, which we set up in last month's video, and just go to the bottom and create a new section. I'm going to call it "expositions".

So, when I run the harvest, all my records go to the directory called "expositions" under my local harvest directory. The URL is http://expositions.journals.villanova.edu/OAI. The metadataPrefix is oai_dc because we want to harvest the basic Dublin Core. And now, some new settings that I didn't show you last time. First of all, injectId = identifier. As I mentioned when we talked about OAI-PMH, when we harvest using that protocol, we get both records and header data. VuFind needs a unique identifier for everything it indexes, and the Dublin Core that we get back from OAI-PMH doesn't necessarily have any kind of identifier in it. But the OAI-PMH headers will always have a unique ID for every record. So, by setting injectId = identifier here, we're telling the harvester to take the ID from the OAI-PMH header, create an identifier tag inside the XML that it's going to harvest and save to disk, and put the ID value in there. This is how the XSLT I showed you earlier was able to pull an ID from the identifier tag and use it in the index.

So, this is a really important feature of VuFind's harvester that enables you to harvest just about anything and reliably be able to index it in Solr with a unique ID. But the IDs that you get back from OAI-PMH are often extremely verbose, and they would make for ugly and unreadable URLs. So, we also have some settings called idSearch and idReplace, which let us use regular expressions to transform the identifiers at the same time that we're injecting them.

So in the case of OJS, the IDs have a long prefix, oai:ojs.pkp.sfu.ca:. We don't want to show that to our users, so we're going to replace it with expositions-. This way, everything that we index from Expositions will have a distinctive prefix on the ID, so we don't have to worry about Expositions records clashing with records from other sources. The other thing is that there are several slashes in some of the IDs, and slashes in IDs can create problems because slashes have a special meaning in URLs, and it requires extra configuration of your web server to make things work nicely. So let's just get rid of all the slashes as well. We're going to say idSearch[] = '|/|' and idReplace[] = '-'.

Let me explain all of this as a whole now that I've typed it all in. idSearch and idReplace are repeatable settings in the file; you can have as many pairs of search and replace as you need to transform your IDs. You just have to be sure to include the square brackets on the end of idSearch and idReplace so that, when the configuration is read, the multiple values are processed correctly. idSearch, as I mentioned, is a regular expression. It uses the Perl-style regular expressions supported in PHP, and those expressions require you to start and end the pattern you're matching with the same delimiter character. So, in the first example, where we're getting rid of the oai:ojs prefix, I surrounded it with matching forward slashes because that is a fairly common convention for regular expressions.

But for the second pair, where we want to turn forward slashes into dashes, I can't surround the forward slash with forward slashes; that would confuse the regular expression engine. So I just used pipe characters instead, so that the expression has matching characters at the beginning and end that don't conflict with the internal part. I could have chosen a different character here; it doesn't really matter, but I think pipes look pretty. So there you go.
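Pulling that whole section together, the expositions entry in local/harvest/oai.ini now looks roughly like this:

  [expositions]
  url = http://expositions.journals.villanova.edu/OAI
  metadataPrefix = oai_dc
  injectId = identifier
  ; strip the verbose OJS prefix and replace it with our own
  idSearch[] = "/oai:ojs.pkp.sfu.ca:/"
  idReplace[] = "expositions-"
  ; turn any remaining slashes into dashes so they don't break URLs
  idSearch[] = "|/|"
  idReplace[] = "-"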

With all of that in place, we're now ready to harvest Expositions. So now I just need to run VuFind's OAI-PMH harvester to pull down the Expositions content. I run php harvest/harvest_oai.php, tell it I want to harvest expositions, and now I wait as it pulls down a whole bunch of records: 285 of them, one for each record in Expositions. Each of them is an XML file, and they all end up in my local/harvest/expositions directory.

So now we're ready to put all these pieces together. We have a directory full of XML files in Dublin Core format. We have an XSLT and a properties file. There is a command-line tool that comes with VuFind called import-xsl.php, so it's php import/import-xsl.php. And this has a nice --test-only mode that you can use if you want to see what it does without actually writing anything into Solr. So I'm going to use that for the first run here, just to demonstrate what happens. The first parameter to this command is the name of an XML file. So I'm going to choose just one of these files, more or less at random.

So I chose "local/harvest/expositions/1588685192_expositions-article-2486.xml". That big number at the front is just a timestamp; the harvester puts the time of harvest on every file it downloads. The second parameter is the name of the properties file that configures the import, and I don't need to tell it the path to that file. I just need to tell it the file name because, like many things in VuFind, it's first going to look in the import subdirectory of your local settings directory to see if you have a locally customized properties file. If it doesn't find one there, it will fall back to the import subdirectory of your VuFind home and use the default one. Since I haven't customized anything yet, it's just going to use the defaults.
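So the full test command looks something like this (the timestamp portion of the filename will differ on your system, since it changes with every harvest):

  php import/import-xsl.php --test-only local/harvest/expositions/1588685192_expositions-article-2486.xml ojs.properties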

So I'm going to run this command, and it outputs a Solr document which is created by transforming the input. As you can see, the allfields field is just a whole bunch of text: it extracted all the free text from the XML, stripping the tags off of it. There's that hard-coded record format of OJS. The ID is that identifier that we injected, and as you can see, it's prefixed with "expositions-" like we told it to be, and the slash that would have been there has become a dash, so all my regular expressions worked. Here are "My University" and "OJS", which came in from those variables that were set in the properties file, and a whole bunch of other stuff. So let's repeat that command, but just take --test-only off to actually index it into Solr.

The XML import does not immediately commit changes to Solr, so if you run the command and search for a record, it won't show up instantly. To ensure that Solr is up to date, run the util/commit.php script to send a Solr commit. I'll do that now to demonstrate that it worked. If I search for all records, I can see that prior to indexing there were 250 records, but now there are 251. One record is from "My University", the institution value that came from the ojs.properties file. If I click on it to filter down, I can see the non-violence article that we indexed from the XML.
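Sending the commit is just one more command from the VuFind home directory:

  # tell Solr to commit pending changes so they become searchable
  php util/commit.php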

We have more than 200 of these records, and we don't want to have to index them by hand one at a time. Fortunately, there is a script called harvest/batch-import-xsl.sh. It takes the name of a directory under your local harvest path and the name of a properties file, then loops through and indexes every single file in that directory using that configuration. This saves lots of typing. As it indexes, it creates a subdirectory of your harvest directory called "processed" and moves the finished files into that processed directory.
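For example, to index everything harvested into the expositions directory using the OJS configuration, the call looks roughly like this:

  harvest/batch-import-xsl.sh expositions ojs.properties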

The batch process is smart enough that if anything should go wrong during the indexing, it will not move files that failed to import correctly. So, if I had one bad record in this batch, all the good ones would get successfully indexed and moved into the processed directory, but the bad one would stay where it was. I could then run that test mode I showed you on the one record to see exactly what error message was preventing the transformation, or whether there's a missing required field or something else to troubleshoot and fix. The other thing left in the expositions directory is a file called "last_harvest.txt", which just contains the date of the last time we ran the OAI-PMH harvester. This allows incremental updates.

Now the index process has completed, and if I do a directory listing of local/harvest/expositions, all that's left is the last_harvest.txt file and the processed directory. Let's go back to VuFind in our browser and refresh these results. Sure enough, there are now 285 records, all searchable, all with links back to OJS to read the full article. Success! But, you may have noticed that I had to ramble for quite a while while all of those records indexed. Indexing things one at a time actually takes quite a while, and if you have thousands or tens of thousands of records, it's even worse. That's why the multi-record option I talked about is really handy.

So what I'm going to do is remove the whole local/harvest/expositions directory so we can start over, and I can show you how much faster this is if we process records in batches instead of one at a time.

First, I'm going to edit my OAI harvesting configuration in local/harvest/oai.ini. All I need to do is add one more setting at the bottom of the expositions section: combineRecords = true. This tells the harvester that, instead of writing one Dublin Core record into each file, it should create one file for every batch of records that comes back over OAI-PMH, wrapping them in a tag called collection. If you want to use a different tag name, there's another setting you can use for that, but for this example, just turning on combineRecords and accepting the default tag name of collection is good enough.
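So the expositions section gains one more line, roughly like this (the camel-case spelling is my reconstruction of the spoken "combine records"; the comments in the stock oai.ini spell out the exact setting names, including the optional tag-name override):

  [expositions]
  ; ...existing url / metadataPrefix / injectId / idSearch / idReplace settings...
  combineRecords = true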

The other thing we need to do is set up the ojs.properties file to use the multi-record XSLT. Let's copy the default import/ojs.properties into local/import, because, as with everything else, files inside local override the defaults in the core code, and then edit local/import/ojs.properties. I'm just going to comment out ojs.xsl and uncomment ojs-multirecord.xsl.
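After that edit, the relevant lines of local/import/ojs.properties look something like this:

  ; use the multi-record sheet instead of the single-record one
  ;xslt = ojs.xsl
  xslt = ojs-multirecord.xsl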

Let's just take a quick look at that other XSLT to see what the differences are. So I'm going to open import/xsl/ojs-multirecord.xsl. This uses template matching: it's going to match the top-level collection tag, then loop through the collection looking for oai_dc records and apply templates to each of them in turn. Then there's the oai_dc template, and this code is quite similar to the single-record code.

It just matches within the scope of a single oai_dc record instead of globally looking for particular tags. This is really probably a better way to approach all XSLT writing. The difference between the multi-record and single-record sheets is that I wrote the single-record one when I didn't know what I was doing, and somebody else who's better at XSLT than me wrote the multi-record one. So I welcome contributions of multi-record import sheets for other metadata formats as well, but I do offer both single- and multi-record options because there are scenarios where each can be useful. We'll talk about that a little more momentarily.
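In outline, the multi-record pattern being described looks something like the following sketch; namespace declarations and most of the field rules are omitted, and the real import/xsl/ojs-multirecord.xsl is the authoritative version.

  <!-- Top-level match: the wrapper tag the harvester created around the batch -->
  <xsl:template match="collection">
    <add>
      <!-- hand each Dublin Core record off to the template below -->
      <xsl:apply-templates select="//oai_dc:dc"/>
    </add>
  </xsl:template>

  <!-- Per-record template: everything in here is relative to one record -->
  <xsl:template match="oai_dc:dc">
    <doc>
      <field name="id">
        <xsl:value-of select="identifier"/>
      </field>
      <!-- ...remaining field rules, scoped to this record... -->
    </doc>
  </xsl:template>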

In any case, I've now shown you the multi-record XSLT, I've reconfigured the OAI-PMH harvester to harvest in groups, and I've configured ojs.properties to use the multi-record XSLT, so everything should be aligned correctly. Let's run the OAI-PMH harvester again: php harvest/harvest_oai.php to harvest expositions. The harvest should take the same amount of time, since we're still harvesting the same 285 records, but if I look inside local/harvest/expositions this time, there are only three files there, because the OAI server provided us with three batches of records, and each of those got saved to a single file.

And now, if I were to run the single-file import-xsl.php script in --test-only mode on one of these files, you'll see that the output is much longer than before because now, instead of just one record transformed for Solr, we have a whole collection of records, 285 of them to be precise.

So it goes on and on and on. But remember how long it took to batch import Expositions when every file contained only one record? Let me show you how much faster it is when there are only three files containing around a hundred records each. I run harvest/batch-import-xsl.sh on the expositions directory with the OJS properties file: file one, file two, file three, done. That was a dramatic improvement in performance.

The only disadvantage to doing things this way that I can see is that, as I mentioned, the import script will skip files that fail the import. So if I had one corrupted record in this OJS instance and I ran this batch import, one of these three files would fail, and I would know there was a problem with one of the hundred or so records within that file, but it would be hard to figure out which one had caused the problem. So doing single-record importing may be valuable for troubleshooting purposes if nothing else, and I would suggest that if you do a batch import and run into trouble, you try a single-record import; that will probably help you pinpoint the cause of your problems.

I should also note that, as I said, most of the example XSLTs are things I wrote that are designed for one record at a time. There's still some work to be done creating batch import XSLTs for all the formats. I showed you OJS because that's one where this work has already been done. If anyone needs multi-record import for another format, that's a contribution I would welcome, so that it can be shared with everyone else using the project, and I expect that over time our repertoire will expand and improve.

So that's it for this month. Thank you for listening, and we'll have more next time.

This is an edited version of an automated transcript. Apologies for any errors.
