About Features Downloads Getting Started Documentation Events Support GitHub

Love VuFind®? Consider becoming a financial supporter. Your support helps build a better VuFind®!

Site Tools


Warning: This page has not been updated in over over a year and may be outdated or deprecated.
videos:indexing_xml_records

Differences

This shows you the differences between two versions of the page.

Link to this comparison view

Both sides previous revisionPrevious revision
Next revision
Previous revision
videos:indexing_xml_records [2020/08/07 13:30] – [Transcript] demiankatzvideos:indexing_xml_records [2023/04/26 13:35] (current) crhallberg
Line 1: Line 1:
 ====== Video 7: Indexing XML Records ====== ====== Video 7: Indexing XML Records ======
  
-The seventh VuFind instructional video explains how to import XML records using XSLT, with an emphasis on records that were harvested via OAI-PMH.+The seventh VuFind® instructional video explains how to import XML records using XSLT, with an emphasis on records that were harvested via OAI-PMH.
  
 Video is available as an [[https://vufind.org/video/Ingesting_XML.mp4|mp4 download]] or through [[https://www.youtube.com/watch?v=qzY5nC9PLLQ&feature=youtu.be|YouTube]]. Video is available as an [[https://vufind.org/video/Ingesting_XML.mp4|mp4 download]] or through [[https://www.youtube.com/watch?v=qzY5nC9PLLQ&feature=youtu.be|YouTube]].
Line 8: Line 8:
  
   - [[indexing:xml|XML Indexing wiki page]]   - [[indexing:xml|XML Indexing wiki page]]
 +
 +===== Update Notes =====
 +
 +:!: This video was recorded using VuFind 6.1. In VuFind 8.0, changes were made which impact the content of this video:
 +
 +  * The ojs-multirecord.xsl file has been removed, and the standard ojs.xsl file has been updated to handle both the single-record and multi-record cases. All of the information in this video about the advantages and disadvantages of each technique still applies, but it is no longer necessary to make changes to ojs.properties in order to support the multi-record case. The only changes you need to make are in oai.ini, to control how records are harvested.
 +  * All of the other example XSLT files have been adjusted to support multi-record indexing, so you can apply this technique to records harvested from other systems as well.
  
 ===== Transcript ===== ===== Transcript =====
  
-// This is a raw machine-generated transcript; it will be cleaned up as time permits. //+Welcome to the seventh VuFind tutorial video. This is a continuation of last month's video about OAI PMH, where we learned how to harvest XML records using tools bundled with VuFind. This month, we are going to look at what to do with those records once we have them and talk about indexing XML generally. 
 + 
 +The first thing that I should emphasize is that MARC XML is a special exception. You can use VuFind's standard MARC indexing tools, which we talked about several months ago, to import binary MARC and MARC XML. That is much easier than trying to use the tools we talked about today to index MARC, which is of course extremely complex. So don't overdo your work by trying to load MARC using XSLT. There are other tools available, but for everything else, what we talked about today should be helpful. 
 + 
 +So, I've already mentioned XSLT, so that was a bit of a spoiler. VuFind uses XSLT for loading XML data into Solr. So, first, I should talk a little bit about what XSLT is. It's short for extensible stylesheet language transformations, and it's a declarative programming language where you build an XML document that tells the XSLT engine how to transform one XML document into another XML document. 
 + 
 +There are several versions of XSLT. I believe the language is up to version 3.0 right now, but PHP's built-in XSLT processor only supports version 1.0 of the language. Obviously, I'm not going to teach you XSLT today in five minutes. It's a bit of a project to learn. So, if you do go off and read a tutorial about it, be sure you find one about the original version of the language and not the later ones that add a lot of additional features. 
 + 
 +It's perhaps a little unfortunate that PHP doesn't support newer XSLT versions. But this is compensated for quite a bit by the fact that there are bindings between XSLT and PHP. So you can write custom functions in PHP and use them in your XSLT. Whenever there's missing functionality in XSLT, you can usually cover that gap with the PHP function. And VuFind comes packed with a number of example functions for common needs and lots of examples of XSLT as well. 
 + 
 +For today's example, I'm going to harvest an OJS journal called Expositions, which is hosted at Villanova. OJS is the Open Journal System, an open-source journal hosting platform that supports OAI PMH. So this is a good example of a real-world system that you can harvest from and index in VuFind. VuFind includes some sample configurations and an XSLT for harvesting from OJS and indexing the resulting data. Again, it's a pretty good simple real-world example. 
 + 
 +So I'm going to go to the command line where I'm in my VuFind home directory and just show a couple of files to give you a taste of what this all looks like. All of VuFind's sample XSLT sheets are in the ''import/XSL'' subdirectory of your VuFind home. And as you can see, we actually have three different flavors of OJS XSLTs. We have NLM-OJS.XSL, which uses the National Library of Medicine's metadata standard, which is a bit richer than the default OAI DC Dublin Core data. But for today's demonstration, I'm just going to use OJS-XSL, which indexes the Dublin Core. 
 + 
 +We also have OJS multi-record which I will show you a little later, so stay tuned for that. But to get things started, I'm just going to show you what the ''ojs.xsl'' looks like. As I mentioned, an XSLT is just an XML document, and it really works by pattern matching using XPath, which is a way of specifying particular locations within an XML document. 
 + 
 +So within an XSLT, anything that you see that's prefixed with XSL colon is an XSLT command, and anything else is actually output that the XSLT is going to create. The XSLTs in VuFind are all designed to create Solr documents for indexing, which always have a top-level add tag that contains doc tags that contain fields that need to be added to the Solr documents. 
 + 
 +So the XSLTs are mostly defining Solr fields and containing rules using XSLT to fill those fields with the appropriate data. For example, to get our unique ID, we're pulling from an XML tag called identifier. We have a hard-coded record format, so this is just putting this literal value into every record, which would enable us to create an OJS specific record driver if we wanted to. 
 + 
 +We have an all fields field to index all of the text within the XML document, which uses some XSLT functions to extract that text. We use variables which XSLT supports to pass in institution and collection values. I will show you momentarily how these variables get set. XSLT supports looping for multi-values. So for example, this code here populates VuFind's language field by looping through every Dublin core language tag in the document and for any non-empty values. 
 + 
 +It calls a PHP function which translates the strings from two-letter or three-letter codes into all textual representations. Again, I obviously can't go into great depth about how all of this works here, but hopefully, this gives you a little taste. If you go off and read an XSLT tutorial or two, it should make even more sense. 
 + 
 +So, the XSLT is only part of what VuFind needs to do XML indexing. The other part is a properties file for the import tool, which tells it not only which XSLT to use but also what custom PHP functions to make available and what values to set for any variables that are used within the XSLT. 
 + 
 +Let's look at a properties file that goes with that XSLT. As I just showed you, all of the import properties files live in the import directory, and they all contain lots of comments explaining in detail what all of the settings mean. But just to go through the highlights, of course, there's an XSLT setting. This tells us which XSLT to use. And, as I teased earlier, you see with OJS, you actually have a choice of the regular OJS.xSL, which will index one Dublin Core record at a time, or the OJS multi-record.xSL, which can index a grouping of Dublin Core records all in one file. 
 + 
 +The multi-record is much faster. It just requires some extra work when you harvest, and I'm going to show you how to use both of these today. We'll start one at a time, and we'll work our way up to multi-record. You also can expose specific PHP functions directly into the XSLT by just creating a list of functions here. 
 + 
 +By default, none of the package configurations do this, but it is a possibility if you want to make PHP functions available to your XSLT. You can also create a class full of custom functions and expose all of them to your XSLT. Most of VuFind's examples just use a VuFind XSLT import class full of static functions for exposing custom behavior, like that string mapping I showed you in the language import. Moving on down, there's the ability to pass the custom classes to XSLT using their fully qualified names with the namespace, but all of VuFind's configurations truncate off the namespace and just expose the base class name, which makes the XSLT a little shorter and more readable. So every time I call a VuFind function, I just say "VuFind::function name" instead of having to type "VuFind/XSLT/import/VuFind". So that's truncated custom class. Finally, there's a parameter section, and this is where you set the values that are exposed as variables to the XSLT. 
 + 
 +So I showed you earlier that the institution and collection fields in the Solr index are getting set to variables, and the variables are set here. So by default, you're going to get institution set to "my university" and collection set to "the JS". Before I can show you any more of the actual importing process, we're going to need some records to play with. So let me set up the OAI PMH harvesting for expositions. I'm going to edit my local harvest OAI.ini file which we set up on last month's video and just go to the bottom and create a new section. I'm going to call it "expositions"
 + 
 +So, when I run the harvest, all my records go to the directory called "expositions" under my local harvest directory. The URL is http://expositions.journals.villanova.edu/OAI. Metadata prefix is OAI DC because we want to harvest the basic Dublin Core. And now, some new settings that I didn't show you last time. First of all, inject ID equals identifier. As I mentioned when we talked about OAI PMH, when we harvest using that protocol, we get both records and header data. View find needs a unique identifier for everything it indexes, and the Dublin Core that we get back from OAI PMH doesn't necessarily have any kind of identifier in it. But the OAI PMH headers will always have a unique ID for every record. So, by setting inject ID equals identifier here, we're telling the harvester to take the ID from the OAI PMH header, create an identifier tag inside the XML that you're going to harvest and save to disk, put the ID value in there, and this is how the XSLT I showed you earlier was able to pull an ID from the identifier tag and use it in the index. 
 + 
 +So, this is a really important feature of view find's harvester that enables you to harvest just about anything and reliably be able to index it in Solr with a unique ID. But the IDs that you get back from OAI PMH are often extremely verbose, and they would make for ugly and unreadable URLs. So, we also have some settings called ID search and ID replace, which let us use regular expressions to transform the identifiers at the same time that we're injecting them. 
 + 
 +So in the case of OJS, the IDs have a long prefix: ''/oai:ojs.pkp.sfu.ca:/''. We don't want to show that to our users, so we're going to replace it with ''expositions-''This way, everything that we index from Expositions will have a distinctive prefix on the ID, so we don't have to worry about Expositions records clashing with records from other sources. The other thing about this is that there are several slashes in some of the IDs, and slashes in IDs can create problems because slashes have special meaning in URLs, and it requires extra configuration of your web server to make things work nicely. So let's just get rid of all the slashes as well. We're going to say ''isSearch[] = '|/|' '' and ''isReplace[] = '-' ''
 + 
 +Let me explain all of this in whole now that I've typed it all in. ID search and ID replace are repeatable settings in the file. You can have as many pairs of search and replace as you need to transform your IDs. You just have to be sure the brackets on the end of ID search and ID replace, so that when the configuration is read, the multiple values are processed correctly. ID search, as I mentioned, is a regular expression. It uses the Perl-style regular expressions supported in PHP, and those regular expressions require you to start and end the expression for the pattern you're matching with the same character. So, in this first example, where we're getting rid of the OAI OJS prefix, I surrounded it with matching forward slashes because that is a fairly common convention for regular expressions. 
 + 
 +But for the second pair, where we want to turn forward slashes into dashes, I can't surround the forward/with forward slashesthat would confuse the regular expression engine. So I just used pipe characters instead so that it has matching characters on the beginning and end of the expression that don't conflict with the internal part. I could have chosen a different character here. It doesn't really matter, but I think pipes look pretty. So there you go. 
 + 
 +With all of that in place, we're now ready to harvest Expositions. So now I just need to run a few ''finds OAI PMH'' to harvest the Expositions content. So I run ''PHP harvest/harvest OAI.php'' and I tell it I want to harvest Expositions, and now I wait as it pulls down a whole bunch of records. 285 records, one for each record in Expositions; each of them is an XML file, and they are all in my local ''harvest/expositions'' directory. 
 + 
 +So now we're ready to put all these pieces together. We have a directory full of XML files in Dublin Core format. We have an XSLT and a properties file. There is a command-line tool that comes with VuFind called ''importXSL.php''. So it's ''PHP import/import-XSL.php''. And this has a nice ''--test-only'' mode that you can use if you want to see what it does without actually writing anything into Solr. So I'm going to use that for the first run here, just to demonstrate what happens. The first parameter to this command is the name of an XML file. So I'm going to choose just one of these files, more or less at random. 
 + 
 +So I chose "local/harvest/expositions/1588685192/expositions-article-2486.xml". That big number at the front is actually just a time stamp, it's the harvester plus the time of harvest on every file download. The second parameter is the name of the properties file. I've configured it to do the import, and I don't need to tell it the path to that file. I just need to tell it the file name because, like many things in VuFind, what it's going to do is it's first going to look in "VuFind/local_dir/import" to see if we have a local customized properties file. If it doesn't find that, it's then going to fall back and look in "VuFind/home/import" and use the default one. So since I haven't customized anything yet, it's just going to go for the defaults. 
 + 
 +So I'm going to run this command and it outputs a Solr document which is created by transforming the input. So as you can see, like in all fields, it's just a whole bunch of text. It extracted all the free text from the XML, taking the tags off of it. There's that hard-coded record format of OJS. The ID is that identifier that we injected, and as you can see, it's prefixed with "expositions" like we told it to be, and the/that would have been here has become a dash, so all my regular expressions worked, and here's my university and OJS that came in from those variables that were set in the properties file, and a whole bunch of other stuff. So let's repeat that command but just take the test only off to actually index it into Solr. 
 + 
 +The XML import does not immediately commit changes to Solr. If you run the command and search for a record, it won't show up instantly. To ensure that Solr is up to date, run the util/commit.php script to send a Solr commit. I'll do that now to demonstrate that it worked. If I search for all records prior to indexing, I can see there were 250 records at that time, but now there are 251One record is from my university, which was working from the ''ojs.properties'' file. If I click on it to filter down, I can see the non-violence article that we indexed from the XML. 
 + 
 +We have more than 200 of these records, and we don't want to have to index them by hand one at a time. Fortunately, there is a script called harvest/batchimport_ssl.sh. It takes the name of a directory under your local harvest path and the name of a property file. It loops through and indexes every single file in that directory using that configuration. This saves lots of typing. As it indexes, it creates a subdirectory of your harvest directory called "processed" and moves those files into the process directory. 
 + 
 +The batch process is smart enough that if anything should go wrong during the index, it will not move files that failed to import correctly. So, if I had one bad record in this batch, all the good ones would get successfully indexed and moved into the process directory, but the bad one would stay there. I could then run that test mode I showed you on the one record to see exactly what the error message is that was preventing the transformation or to see if there's a missing required field or something to troubleshoot and fix. The other thing left in the expositions directory is a file called "lastharvest.txt", which will just contain the date of the last time we ran the OAI PMH harvester. This allows incremental updates. 
 + 
 +Now the index process has completed, and if I do a directory listing of local harvester expositions, all that's left is a lastharvest.txt and a process directory. Let's go back to VuFind in our browser and refresh these results. Sure enough, there are now 285 records, all searchable, all with links back to OJS to read the full article. Success! But, you may have noticed that I had to ramble for quite a while while those 200 records indexed. Indexing things one at a time actually takes quite a while, and if you have thousands or tens of thousands of records, it's even worse. That's why the multi-record function I talked about is really handy. 
 + 
 +So what I'm going to do is: remove the whole local harvester expositions directory so we can start over and I can show you how much faster this is if we do records and batches instead of one at a time. 
 + 
 +First, I'm going to edit my OAI harvesting configuration in local harvests OAI.i. All I need to do is add one more setting at the bottom of this called combine records equals true. This is going to tell harvester instead of writing one Dublin core record into each file, you want to create one file for every batch of records that come back over OAI PMH, and you're going to wrap them in a tag called collection. If you want to use a different tag name, there's another setting you can use for that, but for this example, just turning on combine records and accepting the default tag name of collection is good enough. 
 + 
 +The other thing we need to do is set up the ''obj.properties'' file to use the combined XSLT file. Let's copy the default import ''obj.properties'' into local import because as with everything else, files inside local are going to override defaults in the core code, and let's edit local import ''obj.properties''. I'm just going to comment out ''ojs.xsl'' and uncomment OJS multi record.xsl. 
 + 
 +Let's just take a quick look at that other XSLT to see what the differences are. So I'm going to edit import/XSL/OJS multi record.xsl. This uses template matching. It's going to match the top-level collection tag, and then it's going to loop through the collection looking for OAI DC and apply templates to each of them in turn. Then there's this OAI DC template, and this code is quite similar to the single record code. 
 + 
 +It just matches within the scope of a single OAI DC instead of globally looking for particular tags. This is really probably a better way to approach all XSLT writing. The difference between multi record and single record is that I wrote the single record one when I didn't know what I was doing, and somebody else who's better at XSLT than me wrote the multi record one. So I welcome contributions of multi record import scripts for other metadata formats as well, but I do offer the single and multi record options because there are scenarios where each can be useful. We'll talk about that a little more momentarily. 
 + 
 +In any case, I've now showed you the multi record XSLT. I've reconfigured the OAI PMH harvester to harvester in groups, and I've configured OJS properties to use the multi record XSLT. So everything should be aligned correctly. So let's run the OAI PMH harvester again. So ''php harvester_OAI.php'' to harvest expositions, and the harvest should take the same amount of time. We're still harvesting the same 285 records, but if I look inside local harvest expositions this time, there are only three files there because the OAI server provided us with three batches of records, and each of those got saved to a single file. 
 + 
 +And now, if I were to run the single file import XSL.php script in test only mode on one of these files, you'll see that the output is much longer than before because now, instead of just having one record transform to Solr, we now have a whole collection of records, 285 of them to be precise. 
 + 
 +So it goes on and on and on. But the advantage of this is you remember how long it took to batch import the expositions when every file contained only one record. Let me show you how much faster it is when there are only three files containing under records. Each "harvest/batch import XSL" exposition directory OJS operation file one, two, three were done. So that was a dramatic improvement in performance. 
 + 
 +The only disadvantage to doing things this way that I can see is that, as I mentioned, the import script will skip files that fail the import. So if I had one corrupted record in this OJS instance and I ran this batch import, one of these three files would fail, and I would know there was a problem with one of the hundred records within that file, but it would be hard to figure out which one had caused the problem. So doing single-record importing may be valuable for troubleshooting purposes if nothing else, and I would suggest that if you do a batch import and you run into trouble, try doing a single import that will probably help you pinpoint the causes of your problems. 
 + 
 +I should also note that, as I said, most of the example XSLTs are things I wrote that are designed for a single record at a time. There's still some work to be done creating batch import XSLTs for all the formats. I showed you OJS because that's one where this work has already been done. If anyone needs multi-record import for another format, that's something I would welcome contributions of so that it could be shared with everyone else using the project, and I expect that over time, our repertoire will expand and improve. 
 + 
 +So that's it for this month. Thank you for listening, and we'll have more next time.
  
- welcome to the seventh do you find 
-tutorial video this is a continuation of 
-last month's video about oai-pmh where 
-we learned how to harvest Excel records 
-using tools bundled with view find this 
-month we are going to look at what to do 
-with those records once we have them and 
-to talk about indexing XML generally the 
-first thing that I should emphasize is 
-that Mark XML is a special exception you 
-can use view find standard mark indexing 
-tools which we talked about several 
-months ago to import binary mark and 
-Mark XML and that is much easier than 
-trying to use the tools we talked about 
-today to index mark which is of course 
-extremely complex so don't overdo your 
-your work by trying to load mark using 
-XSLT there are other tools available but 
-for everything else what we talked about 
-today should be helpful so I've already 
-mentioned XSLT so that was a bit of a 
-spoiler if you find uses XSLT for 
-loading xml data into solar so first i 
-should talk a little bit about what XSLT 
-is it's short for extensible stylesheet 
-language transformations and it's a 
-declarative programming language where 
-you build an XML document that tells the 
-XSLT engine how to transform one XML 
-document into another XML document um 
-there are several versions of XSLT I 
-believe the language is up to version 
-3.0 right now but PHP is built-in XSLT 
-processor only supports version 1.0 of 
-the language obviously I'm not going to 
-teach you XSLT today in five minutes 
-it's a bit of a project to learn so if 
-you do go off and read a tutorial about 
-it be sure you find one about the 
-original version of the language and not 
-the later ones that add a lot of 
-additional features 
-it's perhaps a little unfortunate that 
-PHP doesn't support newer XSLT versions 
-but this is compensated for quite a bit 
-by the fact that there are bindings 
-between XSLT and PHP so you can write 
-custom functions in PHP and use them in 
-your XSLT so whenever there's missing 
-functionality and XSLT you can usually 
-cover that gap with the PHP function and 
-view find comes packed with a number of 
-example functions for common needs and 
-lots of examples of XSLT as well so for 
-today's example I'm going to harvest in 
-ojs journal called expositions which is 
-hosted at Villanova ojs is the open 
-journal system and open source journal 
-hosting platform and it supports oai-pmh 
-so this is a good example of a real 
-world system that you can harvest from 
-an index interview find and if you find 
-includes some sample configurations and 
-an XSLT for harvesting from evade ojs 
-and indexing the resulting data so again 
-it's it's a pretty good simple real 
-world example so I'm going to go to the 
-command line where I'm in my view find 
-home directory and just show a couple of 
-files to give you a taste of what this 
-all looks like so all of you find sample 
-XSLT sheets in the import / XSL 
-subdirectory you find home and as you 
-can see we actually have three different 
-flavors of ojs XSLT s we have n LM o j 
-sx SL which uses the National Library of 
-medicines metadata standard which is a 
-bit richer than the default o aidc 
-dublin core data but for today's 
-demonstration i'm just going to use a j 
-s XSL which indexes the 
-dublin core we also have a je s 
-multi-record which i will show you a 
-little later so stay tuned for that but 
-to get things started I'm just going to 
-show you what the OJS 
-XSL looks like as I mentioned an XSLT is 
-just an XML document and it really works 
-by pattern matching using XPath which is 
-a way of specifying particular locations 
-within an XML document so within an XSLT 
-anything that you see that's prefixed 
-with XSL : is an XSLT command and 
-anything else is actually output that 
-the XSLT is going to create so the XSLT 
-is in view find are all designed to 
-create solar documents for indexing 
-which always have a top-level ad tag 
-that contains Doc tags that contain 
-fields that need to be added to solar 
-documents so the XSL T's are mostly 
-defining solar fields and containing 
-rules using XSLT to fill those fields 
-with the appropriate data so for example 
-to get our unique ID we're pulling from 
-in XSL tab I mean an XML tag called 
-identifier we have a hard-coded record 
-format so this is just putting this 
-literal value into every record which 
-would enable us to create an OG a 
-specific a record driver if we wanted to 
-we have an all fields field to index all 
-of the text within the XML document 
-which uses some XSLT functions to 
-extract that text 
-we use variables which XSLT supports to 
-pass in institution and collection 
-values I will show you momentarily how 
-these variables get set and XSLT 
-supports looping for multi values so for 
-example this code here populates view 
-finds language field by looping through 
-every dublin core language tag in the 
-document and for any non-empty values it 
-calls a PHP function which translates to 
-the strings from two letter or 
-three-letter codes into all textual 
-representations again I obviously can't 
-go into great depth about all how all of 
-this works here but hopefully this this 
-gives you a little taste and if you go 
-off and read an XSLT tutorial or two it 
-should make even more sense 
-so the XSLT is only part of what view 
-find needs to do XML indexing the other 
-part being a properties file for the 
-import tool which tells it not only 
-which XSLT to use but also what custom 
-PHP functions to make available and what 
-values to set for any custom variables 
-that are used within the XSLT so let's 
-look at a s dot properties file that 
-goes with that XSLT i just showed you 
-and all of the import properties files 
-live in the import director and they all 
-contain lots of comments explaining in 
-detail what all of the settings mean but 
-just to go through the highlights of 
-course there's an XSLT setting this 
-tells us which XSLT to use and as I 
-teased earlier you see with ojs you 
-actually have a choice of the regular 
-ojs XSL which will index one a dublin 
-core record at a time or the OJS multi 
-record XSL which can index a grouping of 
-dublin core records all 
-in one file the multi-record is much 
-faster it just requires some extra work 
-when you harvest and I'm going to show 
-you how to use both of these today we'll 
-start one at a time and we'll work our 
-way up to multi-record you also can 
-expose specific PHP functions directly 
-into the XSLT by just creating a list of 
-functions here by default none of the 
-package configurations do this but it is 
-a possibility if you want to make PHP 
-functions available to your XSLT you can 
-also create a class full of custom 
-functions and expose all of them to your 
-X and sub t and most of view finds 
-examples just to use a view find XSLT 
-import view find class full of static 
-functions for exposing custom behavior 
-like that string mapping I showed you in 
-the language import moving on down 
-there's the ability to pass the custom 
-classes to XSLT using their fully 
-qualified names with the namespace but 
-all of you finds configurations truncate 
-off the namespace and just expose the 
-base class name which makes the XSLT a 
-little shorter and more readable so 
-every time I call the view find function 
-I just say you find : : function name 
-instead of having to type you find slash 
-XSLT slashing or slash to be fun 
-so best truncate custom class finally 
-there's a parameter section and this is 
-where you set the values that are 
-exposed as variables to the XSLT so I 
-showed you earlier that the institution 
-and collection fields in the solar index 
-are getting set to variables and the 
-variables are set here so by default 
-you're going to get institution and 
-collection set to ojs 
-so before I can show you any more of the 
-actual importing process we're going to 
-need some records to play with so let me 
-set up the oai-pmh harvesting for 
-expositions 
-I'm going to edit my local harvest oai 
-ini file which we set up on last month's 
-video and just go to the bottom and 
-create a new section I'm going to call 
-it expositions so that when I run the 
-harvest all of my records will go into a 
-directory called expositions under my 
-local harvest directory the URL is HTTP 
-expositions journals villanova you /ai 
-metadata prefix is oai DC because dublin 
-core and now some new settings that i 
-didn't show you last time first of all 
-inject ID equals identifier as i 
-mentioned when we talked about oai-pmh 
-when we harvest using that protocol we 
-get both records and header data if you 
-find needs a unique identifier for 
-everything in index dublin core that we 
-get back from a IP mhm doesn't 
-necessarily have any kind of identifier 
-in it but the oai-pmh headers will 
-always have a unique ID for every record 
-so by setting inject ID equals 
-identifier here we're telling the 
-harvester take the ID from the oai-pmh 
-header create an identifier tag inside 
-the XML that you're going to harvest and 
-save to disk put the ID value in there 
-and this is how the XSLT I showed you 
-earlier was able to pull an ID from the 
-identifier tag and use it in the index 
-so this is a really important feature of 
-view files harvester that enables you to 
-harvest just about anything and reliably 
-be able to index it in solar with a 
-unique ID 
-the IDS that you get back from oai-pmh 
-are often extremely verbose and they 
-would make for ugly and unreadable URLs 
-so we also have some settings called ID 
-search and ID replace which let us use 
-regular expressions to transform the 
-identifiers at the same time that we're 
-injecting them so in the case of ojs the 
-ids have this long prefix oai : o JSP KP 
-dot s fu CA : we don't want to show that 
-to our users so we're going to replace 
-it with expositions - so this way 
-everything that we index from 
-expositions will have a distinctive 
-prefix on the ID so we don't have to 
-worry about expositions records clashing 
-with records from other sources the 
-other thing about this is that the there 
-are several slashes in some of the IDs 
-and slashes in IDs can create problems 
-because slashes have a special meaning 
-in URLs and it requires extra 
-configuration of your webserver to make 
-things behave nicely so let's just get 
-rid of all the slashes as well so we're 
-gonna say ID search bracket bracket 
-equals type slash pipe ID replace 
-bracket bracket equals - so let me 
-explain all of this in whole now that 
-I've typed it all in I do search and ID 
-replace or repeatable settings in the 
-file you can have as many pairs of 
-search and replace as you need to 
-transform your IDs you just have to be 
-sure the brackets on the end of ID 
-search and ID replace so that when the 
-configuration is read the multiple 
-values are processed correctly an ID 
-search as I mentioned is a regular 
-expression it uses the purl style 
-regular expressions supported in PHP and 
-those regular expressions require you to 
-start and end the expression 
-for the pattern you're matching with the 
-same character so in this first example 
-where we're getting rid of the oai ojs 
-prefix i surrounded it with matching /iz 
-because that is a fairly common 
-convention for regular expressions but 
-for the second pair where we want to 
-turn forward slashes into dashes I can 
-surround the forward slash with forward 
-slashes that would confuse the regular 
-expression engine so I just used pipe 
-characters instead so that it has 
-matching characters on the beginning and 
-end of the expression that don't 
-conflict with the internal part I could 
-have chosen a different character here 
-it doesn't really matter but I think I 
-just look pretty so there you go 
-so with all of that in place we're now 
-ready to harvest expositions so now I 
-just need to run of view finds oai-pmh 
-harvests to get the expositions content 
-so I run PHP harvest slash Argus dot PHP 
-and I tell it I want to harvest 
-expositions and now I wait and it pulls 
-down a whole bunch of records 285 
-records one for each record in 
-expositions each of them is an XML file 
-and they are all in my local harvest 
-expositions directory so now we're ready 
-to put all these pieces together we have 
-a directory full of XML files in dublin 
-core format we have an xslt and we have 
-a properties file so there is a 
-command-line tool that comes with you 
-find called import XSL PHP so it's PHP 
-import / import - XSLT HP and this has a 
-nice - - test - only mode that you can 
-use if you want to see what it does 
-without actually writing anything into 
-solar so I'm going to use that for the 
-first run here just to demonstrate 
-what's happened so the first parameter 
-to this command 
-is the name of an XML file so I'm going 
-to choose just one is these files more 
-or less at random 
-so I chose local harvest expositions one 
-five eight eight six eight five one nine 
-two expositions article 2486 that big 
-number at the front is actually just a 
-timestamp the the harvester cuts the 
-time of harvest on every file downloads 
-the second parameter is the name of the 
-properties file I've configured to do 
-the import and I don't need to tell it 
-the path to that file I just need to 
-tell it the file name because like many 
-things in view find what it's going to 
-do is its first going to look in view 
-find local dur slash import to see if we 
-have a local customized properties file 
-if it doesn't find that it's then going 
-to fall back and look in if you find 
-home slash import and use the default 
-one so since I haven't customized 
-anything yet it's just going to to go 
-for the defaults so I'm going to run 
-this command and it outputs a solar 
-document which itted by transforming the 
-input so as you can see like in all 
-fields it's just a whole bunch of text 
-it extracted all the free text from the 
-XML taking the tags off of it there's 
-that hard-coded record format of ojs the 
-ID is that identifier that we injected 
-and as you can see it's prefixed with 
-expositions like we told it to be and 
-the slash that would have been here has 
-become a dash so all my regular 
-expressions worked and here's my 
-University and ojs that came in from 
-those variables that were set in the 
-properties file and a whole bunch of 
-other stuff so let's repeat that command 
-but just take the test only off to 
-actually index it into solar the exit 
-import does not immediately commit 
-changes to solar so if you just run this 
-command and try to search for a record 
-it won't show up instantly the way to 
-ensure that solar is all the way up to 
-date is to run the utility MIT PHP 
-script to send a solar commit so I'm 
-gonna do that just so I can demonstrate 
-that this actually worked so now if I go 
-to my browser I loaded up this search 
-for all records prior to indexing and 
-you can see there there were 250 records 
-at that time but if I repeat the search 
-now there are now 251 and as you can see 
-in the institution facet we have one 
-from my University which was what and 
-from that Oh Jas properties file so if I 
-click on that to filter down here is the 
-nonviolence article that we indexed from 
-the XML so that's really great but we 
-have more than 200 of these records we 
-don't want to have to index them by hand 
-one at a time 
-fortunately there is a script called 
-this slash batch import excess LSH which 
-will take the name of a directory under 
-your local harvest path and the name of 
-a property file and it will loop through 
-and index every single file in that 
-directory using that configuration thus 
-saving you lots and lots of typing and 
-as it does the indexing it also creates 
-a subdirectory of your harvest directory 
-called processed and it moves those 
-files into the process directory so at 
-the end of this process after all two 
-hundred-plus files have been indexed I 
-should have an empty expositions 
-directory with a process subdirectory 
-containing all the hundreds of records 
-that got indexed the batch process is 
-also smart enough that if anything 
-should go 
-during the index it will not move files 
-that failed to import correctly so if I 
-had one bad record in this batch all the 
-good ones would get successfully indexed 
-and moved into the processed directory 
-but the bad one would stay there 
-and I could then for example run that 
-test mode I showed you on the one record 
-to see exactly what the error message is 
-that was preventing the transformation 
-or to see oh there's a missing required 
-field or something to troubleshoot that 
-and fix it the other thing that will be 
-left in the expositions directory is a 
-file called last harvest dot txt which 
-will just contain the date of the last 
-time we ran the oai-pmh harvester which 
-allows incremental updates which I 
-believe I mentioned last time but that 
-means that if I ran the harvest again 
-tomorrow 
-and two new records had been added to 
-ojs it would only harvest those two and 
-then I could index those and I wouldn't 
-have to reenact the other 200 so now the 
-the index process has completed and if I 
-just do a directory listing of local 
-artists expositions you'll see that I'm 
-not lying to you all that's left here is 
-the last harvest text and a processed 
-directory so let's go back over to view 
-finder browser and refresh these results 
-and sure enough here are 285 records 
-they're all searchable they all have 
-links back to ojs to read the full 
-article success but 
-you may have noticed I had to ramble for 
-quite a lot of time while those 200 
-records indexed because indexing things 
-one at a time actually takes quite a 
-while and if you have thousands or tens 
-of thousands of records it's even worse 
-and that is why the multi record 
-function I talked about is really handy 
-so what I'm going to do is I'm going to 
-remove the whole local harvest 
-expositions directory so we can start 
-over and I can show you how much faster 
-this is if we do records and batches 
-instead of wanting to talk so first I'm 
-going to edit my oai harvesting 
-configuration in local harvest I died 
-and I all I need to do is add one more 
-setting at the bottom of this called 
-combined records equals true and what 
-that is going to do is tell the 
-harvester instead of writing one dublin 
-core record into each file you want to 
-create one file for every batch of 
-records that come back over IP MH and 
-you're going to wrap them in a tag 
-called collection if you want to use a 
-different tag name there's another 
-setting you can use for that but for 
-this example just turning on combined 
-records and accepting the default tag of 
-collection is good enough the other 
-thing we need to do is set up the OJS 
-properties file to use the combined XSLT 
-sign so let's copy the default import Oh 
-Jo stop properties into local import 
-because as with everything else files 
-inside local are going to override 
-defaults in the core code and let's edit 
-local import okay I stopped properties 
-I'm just going to comment out a JSX SL 
-and uncomment ojs multi-record XSL 
-so let's just take a quick look at that 
-other XSLT to see what the differences 
-are so I'm gonna edit import / XSL 
-ojs multi-record XSL and so this uses 
-template matching so it's going to match 
-the top-level collection tag and then 
-it's going to loop through the 
-collection looking for a IDC and apply 
-templates to each of them in turn and 
-then there's this Oh a IDC template and 
-this code is quite similar to the single 
-record code it just matches within the 
-scope of a single Oh a IDC instead of 
-globally looking for particular tags and 
-this is really probably a better way to 
-approach all XSLT writing the difference 
-between multi record and single record 
-is that I wrote the single record one 
-when I didn't know what I was doing and 
-somebody else he's better at XSLT than 
-me wrote the multi record one 
-so welcome contributions of multi record 
-import scripts for other metadata 
-formats as well but I do offer the 
-single and multi record options because 
-there are scenarios where each can be 
-useful we'll talk about that a little 
-more momentarily 
-in any case I've now shown you the multi 
-record XSLT I've reconfigured the 
-oai-pmh harvester to harvesting groups 
-and I've configured ojs properties to 
-use the multi record XSLT so everything 
-should be aligned correctly so let's run 
-the oai-pmh harvester again so HP 
-harvest underscore oai dot PHP so 
-harvest expositions and the harvest 
-should take the same amount of time 
-we're still harvesting the same 285 
-records 
-but if I look inside Local Harvest 
-expositions this time there are only 
-three files there because the oai server 
-provided us with three batches of 
-records and each of those got saved to a 
-single file 
-and now if I were to run single file 
-import XSL dot PHP script in test only 
-mode on one of these files you will see 
-that the output is much longer than 
-before 
-because now instead of just having one 
-record transformed to solar we now have 
-a whole collection of records of them to 
-be precise so it goes on and on but the 
-advantage of this is you remember how 
-long it took to batch import the 
-expositions and every file contained the 
-only one record let me show you how much 
-faster it is when there are only three 
-files containing 100 records each 
-harvest slash batch important XSL 
-physicians directory ojs doc figuration 
-file one two three word up 
-so that was a dramatic improvement in 
-performance the only disadvantage to 
-doing things this way that I can see is 
-that as I mentioned the import script 
-will skip files that fail the import so 
-if I had one corrupted record in this 
-ajs instance and I ran this batch import 
-one of these three files would fail and 
-I would know there was a problem with 
-one of the hundred records within that 
-file but it would be hard to figure out 
-which one had caused the problem so 
-doing single record importing may be 
-valuable for troubleshooting purposes if 
-nothing else and I would suggest that if 
-you do a batch import and you run into 
-trouble try doing a single import that 
-will probably help you pinpoint the 
-causes of your problems I should also 
-note that as I said most of the example 
-XSL teas are things I wrote that are 
-designed for a single record at a time 
-there's still some work to be done 
-creating batch import XSL teas for all 
-the format's I showed you ojs because 
-that's one where this work has already 
-been done if anyone needs multi record 
-import for another format that's 
-something I would welcome contributions 
-of so that it could be shared with 
-everyone else using the project and I 
-expect that over time our repertoire 
-will expand and improve so that's it for 
-this month 
-thank you for listening and we'll have 
-more next time 
  
 +//This is an edited version of an automated transcript. Apologies for any errors.//
 ---- struct data ---- ---- struct data ----
 +properties.Page Owner : 
 ---- ----
  
videos/indexing_xml_records.1596807000.txt.gz · Last modified: 2020/08/07 13:30 by demiankatz