
In 2009 (after completing the pilot project to collect women's blogs), we began collecting websites created by individuals and organizations whose papers/records are held at the library.

Currently (2017) the Harvard Library is moving captured and new websites from WAX to Archive-It. Until further notice, if you find a website that needs to be captured, please let Laura Peimer know. More to come!

While conducting a collection survey, the processor will check whether the person/organization has a website and, if so, whether we have already identified it. If you locate an associated website that is not currently being captured, consult Amy Benson, the Digital Librarian/Archivist, about harvesting it. From the time of first harvest, we must wait three months, per the Office of General Counsel, before making the harvested content publicly available through the WAX public interface.

 

While completing the finding aid, the processor should include a file unit description of the website, even if it has not yet been harvested.

 

Ideally, by the time the processor has completed the finding aid, a first harvest will have been successful and the web content will have been added to the Schlesinger Library Sites web archiving collection (SL Sites). The Digital Librarian/Archivist will supply Paula Aloisio with a URN for the collection's archived content, and Paula will add the hot link(s) to the finding aid. If the URN is not available when the finding aid is complete, the finding aid will still be posted and Paula will add the URN/link later. The processor should, however, include all the information as if the web content were available.

 

The workflow will take this shape:

 

  • As part of the research on a collection, the processor searches for a website for the person or organization.
  • If a website is found, the processor checks WAX Tracker to see if the website is already being captured.

 

  • If a website exists but isn't in WAX Tracker, the processor:
    • Sends the website URL and the donor's contact information (preferably an email address) to the Digital Librarian/Archivist (DL/A) to begin capture/harvest of the site. For the near future, the DL/A will notify the donor that we are going to capture their website using WAX. (We have an existing notification letter.) Longer-term, donors will have signed the new donor agreement that includes permission to capture their site via WAX.
    • Discusses the scope and frequency of the harvest with the Digital Librarian/Archivist (domain, sub-domain, domain plus one / monthly, annually, etc.).

 

  • In the extent statement, include "electronic records."

          EXAMPLE: 4.16 linear ft. (10 file boxes) plus 1 folio folder, 1 folio+ folder, 10 audiocassettes, 2 compact discs, 9 videotapes, electronic records


  • In the Scope and Content note: “XX’s web site is being captured periodically as part of Schlesinger Library’s web archiving program.”

    EXAMPLE: Also included is Griffin's web site, which is being captured periodically as part of Schlesinger Library's web archiving program.

    See Papers of Susan Griffin finding aid.

  • In file unit descriptions, use “E” as the container, followed by a file unit number.

           EXAMPLE: E.1. Michelle Obama's web site, 2010-ongoing. [hot link added by Paula]

 

  • In the added entries, include "Electronic records" AND "Web sites."

  • The collection, series, and subseries dates should exclude the website date(s), which should only appear in the file unit description (e.g., E.1. Web site, 2010-ongoing).
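In an EAD-encoded finding aid, the conventions above might come together roughly as in the following sketch. This is illustrative only: the component level, the container type value, and the suggestion of a &lt;dao&gt; element for the hot link are assumptions, not a documented Schlesinger template.

```xml
<!-- Hypothetical EAD 2002 component for a captured website -->
<c02 level="file">
  <did>
    <!-- "E" container convention with a file unit number, per the bullet above -->
    <container type="electronic">E.1</container>
    <unittitle>Michelle Obama's web site,
      <unitdate type="inclusive">2010-ongoing</unitdate>
    </unittitle>
    <!-- The URN hot link is added later by Paula, e.g. as a <dao> element -->
  </did>
</c02>
```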

 

If you have a website in your finding aid, please mention it to Paula when you give her the XML document for review. She will need to create a "NET holdings" record for the bib record. Processors shouldn't worry about this, but should let her know so she can plan to make the necessary record.
