It really depends on the vintage of the particular record, or the vintage of the 505 field if it was added later. A long time ago, there were practical limits to the number of characters that could be in a particular field. Those limits really don't exist anymore, or they're high enough that they're almost never reached. So if the record is old enough, it's very possible that if you tried to input the record manually with a huge 505 field, the system would hiccup and you'd have to split the 505 field manually into multiple fields so that each of the individual fields would be under the limits. Again, that's not really the case anymore. Nowadays, it's more common that a cataloger will split a 505 for logical or bibliographical reasons: the contents of different volumes or different parts of a multipart resource, that kind of thing. So it really depends on the context of the record and the resource you're cataloging as to whether you want to split a 505 field or whether it's necessary. Sometimes it's easier to make the 505 field legible or readable if you split it up, but it's not something that usually gets done automatically nowadays.
Not really, any longer, as I mentioned before. There used to be limits to the length of individual fields, but the limits on both fields and records themselves are now so high that for most practical purposes they are nonexistent. I don't think there's an actual limit on the number of characters in a 505 field or any other field.
I guess logically that would be the case, but MARC21 calls for individual titles in a 505 to be separately subfielded in subfield $t if you are going to use the enhanced 505 practice. But that's just what MARC21 says and we carry that over into Bibliographic Formats and Standards.
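To illustrate the enhanced 505 practice mentioned above, here is a sketch of what separately subfielded titles look like (the titles and names are invented; second indicator 0 signals an enhanced contents note, with $t for titles and $r for statements of responsibility):

505 00 $t Prelude / $r C. Composer -- $t Fugue in D minor / $r C. Composer -- $t Postlude / $r A. Arranger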
If there is a logical bibliographical way to indicate the sequence, such as volume numbers or something like that, that would be the way to go if you're going to have multiple 505s. If there's a bibliographically logical way to identify each of the 505 fields, such as volume number or volume title, something like that, that would be the practice.
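As a hedged illustration of that practice (volume titles invented), one common pattern is to lead each 505 with the volume designation, using first indicator 8 on continuation fields so the "Contents:" display constant isn't repeated:

505 0  v. 1. The founding era, 1780-1850
505 8  v. 2. The modern era, 1850-1970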
Most of the 5xx fields that we've mentioned today that are indexed are in both the notes index, which is "nt:", and in the keyword index, "kw:". Some of the fields we've mentioned today may also be in other indexes, in many cases along with other fields that aren't 5xxs or that come from other ranges in the MARC format. Obviously, we didn't go into much detail, in fact hardly any detail at all, about which specific indexes the fields are in, because the presentation was already more than half an hour long and we try to keep it to a controllable length. It would have been even longer if we had gone into that kind of detail. But those details are in Searching WorldCat Indexes.
Not per se. The deduplication process is, as we have talked about many times, really complicated. There are some 5xx field elements that are brought into consideration in certain comparisons within the deduplication process, so it's quite possible that the information in a 5xx field can block a deduplication, or, I guess, prove to the DDR process that the records in question are not duplicates or are questionable enough not to merge. So I guess you could say that yes, the presence of certain information in some 5xx fields can prevent a DDR transaction.
So, the LBDs themselves do transfer. The information in an LBD does not transfer into the bibliographic record, but when records are merged, the LBDs that are associated with them remain associated with the retained record.
This is really a matter of cataloger's judgment in many cases. If you would be creating different 521 fields with different first indicators, for, let's say, reading grade level, interest age level, interest grade level, and so on, then obviously you would want separate 521 fields. If the information is of a similar type and would take the same first indicator, even a blank or an 8, it could go in a single field. But generally, I would use separate 521s in a case where there are different rating systems: the MPAA rating in one 521, the Canadian home video rating system in a separate 521 field. It's really up to you as the cataloger.
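A sketch of the separate-rating-systems approach described above (the specific ratings are invented; first indicator 8 means no display constant is generated):

521 8  MPAA rating: PG-13.
521 8  Canadian Home Video Rating System: 14A.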
Well, in the days of AACR2, the order of notes was supposed to be the order of the instructions in AACR2, more or less, although there was a provision that a cataloger could choose to make a particular note the first note if it deserved some kind of prominence. In the era of RDA, however, there is really no prescribed order of notes, so it's really up to you. A lot of catalogers, especially those of us of a certain age, will have continued to use the AACR2-ish order of notes, but strictly speaking, there is no order. And it's my understanding that some local systems actually rearrange the notes into some kind of order, especially some kind of numerical order. But there is no longer any prescribed order for notes under RDA.
Reformat does not do that, especially considering that there is no longer an "expected order" for notes.
From the response to a previous question, it sounds like LBD information is not being lost in a merge transaction. It's just that information in the bibliographic record itself may or may not transfer, depending on the circumstances of the deduplication process.
It sounds like what you're asking about is whether, when your record is incoming and matching to an existing record, and there are 5xx fields that do not transfer during that matching process, that you want those 5xx fields preserved in an LBD even though they weren't there originally. And I don't think any of us on this call have the expertise to answer that.
With WorldCat records, I tend to think you want notes that will be of interest to other catalogers who have that resource. If the information is purely local, I'm thinking you would put it in a local note, like a 590. But if the exhibition has a national or international impact, it makes more sense to have that 585 field in the WorldCat record, because it will be of interest outside of just your local institution.
Just remember that you're cataloging a bibliographic resource. If the resource itself indicates that materials in it have been used as part of an exhibition or are somehow related to an exhibition, that mention in the resource would be enough justification for the 585 note, regardless of whether the "impact" is national, international, or local.
That would take some research. It would not surprise me if there were differences among different language-of-cataloging communities. There are certainly fields that get heavier use in certain communities, not just language-of-cataloging communities but also communities using different descriptive cataloging standards. But that would take extensive research to answer really definitively, I think.
The 510 field is, if not at the very top, it is one of the fields that we have put on our list of things to be indexed, so that is definitely under consideration in the future.
My apologies if I misspoke. It is indexed and my notes indicated that.
There's no practical limit to the number of characters that may be in any field, 505, 520--any field. There used to be, but those have more or less been eliminated. Previous OCLC systems would sometimes break up a 505 field that was too long or a 520 field that was too long into multiple fields, but as far as I'm aware, the current implementations of WorldCat do not do that.
One of our colleagues mentioned a character limit of 9,999 to be able to comply with MARC (and export the record).
It's possible that the 506s could have transferred. If you have examples, OCLC number examples, we could take a look at Journal History to see when the 506s were added and possibly where they may have come from.
Field 520, if I remember correctly, is in the notes index and the keyword index. So, note is nt: and keyword is kw:. It's just general word-by-word or phrase searching, however you want to go. For most pages in BFAS, there's a link back to Searching WorldCat Indexes for that field, so if you click on that, you will get to Searching WorldCat Indexes and it will give you the details of how that field is indexed and in what indexes it appears.
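As a hypothetical illustration of those two indexes (the search phrase is invented), a phrase from a 520 summary could be retrieved either way:

nt:"coming of age"
kw:"coming of age"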
For most of these, they are in those two indexes, for nt: for note and kw: for keyword.
Strictly speaking, if a resource is being cataloged in English--that is, the 040 $b says "eng," the descriptive information--the notes that you add to the record are supposed to be in English. That's how things are supposed to work.
If the resource is being cataloged in English, that is, the 040 $b is coded as English, the notes and things related to the notes would best be in English. That's the general rule.
You may have a local policy where you want to include notes or summaries in a different language or the language of the item if you want to do that. It's not standard practice to do that. Another thing I have seen sometimes is if the summary note is a quoted note and someone is quoting from the source where it's not in English, you may have it not in English in quotes, within an English language of cataloging record. So that's something that's possible.
All of the notes should follow the language of cataloging. Even though you might not be able to do that to the WorldCat record, you can definitely do that in your local practices in your catalog to serve your patrons.
It is already there, it's in chapter 5.
That is a really great question and unfortunately, I do not know how these fields get decided if it's indexed or not. I know there are some fields that members wish would be indexed but they're not.
A lot of indexing decisions were made a long time ago in consultation with a group of users who would consult with people at OCLC about indexing and display issues. We still get recommendations nowadays from groups such as the Online Audiovisual Catalogers, OLAC, and the Music OCLC Users Group, MOUG, about indexing and related things. When new fields or subfields are added as part of the OCLC-MARC update, we make decisions about indexing based on what we know about the field or subfield and how we expect it to be used, how it fits into the rest of the field, and how useful it would be to have it indexed, or whether it's something that isn't worth indexing. If you have requests for things that aren't indexed that you would like to see indexed, you can send them to us and we will add them to a sort of wish list of things that could be indexed in the future.
They can just send those to askqc@oclc.org email address.
I think current practice is to, if you're quoting from a source other than the resource itself, and possibly even depending on where it comes from in the resource itself, current practice would be to cite that source. It is subfield c. So yes, if you do cite the source, put it into subfield c.
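A hedged sketch of that practice (the summary wording and source are invented), with the quoted summary in $a and the cited source in $c:

520    "A wide-ranging survey of the region's wetlands and the birds that depend on them." $c Publisher's description.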
When you copy and paste into a bibliographic record, sometimes that can cause problems. There are ways around that, because not all sources from which you would copy conform to the display rules for bibliographic records.
The characters not to include are mentioned in chapter 2 of Bibliographic Formats and Standards, like the vertical bar and smart characters. Off the top of my head, certain resources like math books that have a lot of symbols, like pi, can also cause issues when copied and pasted into the record. There's also the option to paste things into records as plain text rather than doing a Ctrl+C and Ctrl+V copy and paste, so that can sometimes help as well.
Yes, usually when you validate, it will say something like "bad character" or "invalid character" if there is something there that is not valid. That happens a lot less often now that we validate Unicode within WorldCat than it used to, but it does occasionally happen, and it is those characters that are documented as not being usable.
Sometimes those validation error messages are kind of cryptic though, so sometimes they aren't as useful as they could be.
Sometimes validation doesn't report everything that it should; that can happen a lot with authority records and illegal character issues, so just keep that in mind as well.
Those are generally still good practices not to include things like that, in many cases. There will occasionally be cases where you will want to ignore those, but generally those are good practices.
We still see a lot of records in WorldCat that have those words in them, like "Introduction" or "Index" or "Bibliography" that you probably wouldn't normally include if you were putting in your own contents note. Those are often there in eBook records that are machine-generated, so an automated process is generating those contents notes rather than a human being and that's often why you see them.
I don't necessarily know. I don't recall seeing page numbers. I suppose there could be circumstances where it would be appropriate to include them, but I can't think of any offhand. If you're able to edit that 505 and you're so inclined, you can probably take those out.
No, it's not different. It's just that bibliography and index notes go into a 504 field, where it says "includes bibliographical references and index," or, if it's just an index, that would usually go in a 500 field, so that's the main reason, I think, because they are distinguished that they are not put into a contents note in the 505 field.
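The two notes described above look like this in practice:

504    Includes bibliographical references and index.
500    Includes index.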
I suspect we're all being silent because we don't know the answer. I don't know the answer.
The answer is "it depends." If indeed this is just another printing of the 2018 book and it just happens to be in paperback and it's the same size, it has the same pagination, et cetera, you're welcome to add that ISBN to the existing record and use that record. If, however, there is a difference with the paperback issuance, meaning that it has a different size or a different pagination or perhaps a different edition statement, maybe even some new foreword or something, then you would want to create a new record.
You can also find out more information in Bibliographic Formats and Standards, the chapter on when to input new records. That will give you an idea of the situation that you have and if a new record is warranted or not.
I would say it depends as well. In this case, they could have been transferred through merges in the past; if everything matched and there was a difference in ISBN, those would transfer over. Sometimes the ISBNs are really for another version of the resource, like the large print or maybe the electronic version, and they're not coded correctly to indicate whether they're valid for the particular description in the record. So it just depends, and you end up having to look at the record and really verify. Someone could have put those ISBNs in deliberately after doing their verification, or it could have just been a transfer transaction, and those are not necessarily always done by a person. So yes, it would depend.
I would say if you can confirm that an ISBN is really for the large print and not for the hardback, then it would be appropriate to put it in a subfield z to indicate it's an invalid ISBN. There's nothing wrong with having ISBNs from other formats if it's the same title. So for the large print and the electronic versions, if you want to have those ISBNs there, they're okay to be there as long as you indicate they're invalid, because they represent another version of the resource.
Under both AACR2 and RDA, you're allowed to include in a bibliographic record all of the ISBNs, or other kinds of standard numbers, that are in the resource, whether they apply to that resource or not. Those that do not apply to the resource being cataloged would properly be in subfield z rather than in subfield a. Only those that apply to the actual resource being cataloged belong in subfield a of an 020, for instance.
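A hedged sketch of that coding (the ISBNs are invented placeholders): the number for the version in hand goes in $a, and a number for another version goes in $z.

020    9798887654321 $q hardcover
020    $z 9798887654338 $q large print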
It's not just the 586 now. It's all of them. With punctuation now also being optional--and there was a whole Virtual AskQC Office Hour on this--so a lot of these notes will not be ending in any punctuation. The PCC, the Program for Cooperative Cataloging, has issued guidelines about punctuation and you now have a choice of including ISBD punctuation or not including ISBD punctuation. You'd have to go to the PCC website to find those instructions about what the options are, but generally speaking, most of the 5xx fields, unless they end in an abbreviation or some other term that would naturally have a period after it, the final punctuation is generally now left off of many, many 5xx fields.
What that means is that there are different ways a record could be considered a duplicate and merged into a better record. This can happen through our duplicate detection and resolution software (DDR), or it can happen if an institution reports the duplicate records and they are manually reviewed and then merged. When that merge transaction takes place, there is a hierarchical table that lists the different fields and indicators in the record, and decisions are then made based on that criteria: does that field transfer or does it not? So certain fields automatically transfer during a merge transaction and some do not. I hope that answers the question.
No, it is not. Actually, Bibliographic Formats and Standards (BFAS), chapter 3, section 3.2.2 covers offprints and detached copies, and you do not use "In" analytic cataloging conventions for offprints or detached copies.
No, field 777 shouldn't be used for bound-with situations; that's better recorded in field 501. Field 777 covers items that are issued together, when, even though they're separate, they were actually published together.
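A hedged sketch of the 501 practice mentioned above (title, author, and imprint invented):

501    Bound with: The second treatise / by A. Author. London : Printed for the bookseller, 1787.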
It sounds like local practice. The items weren't issued together or bound together at publication; it was done after the fact. So you probably wouldn't want to indicate that in the shared WorldCat bibliographic record; that would be considered local information.
Just to add, I agree that's purely local information that maybe would best be handled with notes rather than putting anything into the record in WorldCat.
I do not know enough about that field. I would think that if the modules were all issued at the same time and were somehow issued together, then it would be appropriate to use a 777.
It sounds like it is possible. I have never seen a record where that kind of thing has happened, at least that I remember. We see so many different scenarios played out in records, but not a case where somebody has multiple volumes that form a set and uses a linking field to say all of these volumes go together. What you do see often in that case is somebody making a decision to catalog the set itself and then, if the individual volumes were scattered around the collection in different classifications, making separate records for the individual parts. That's potentially a situation that could be handled with a 773 field to link the individual parts up to the parent record for the set, but not so much a 777.
I am not sure about that. It seems more that the field is used when there is data in an item that's being used to create a different item. I don't know.
I think you're basically correct. It's not the kind of thing you would typically use to cite sources that were in a bibliography in some item, to say this is where the information came from. It's not as if field 786 cannot be repeated, it can, but the typical situation envisioned when this field was added to the format was that the resource you're cataloging is based on data from another source. You don't see that many 786 fields around.
That feature does pull in information from a cited record. I'm not sure how targeted that data is for each specific tag. I do know you do need to have a correct linking tag and correct indicators in order for the feature to work. It would also be recommended that if you were creating a linking field and you wanted the edition information present, then you would just double check the linking field and make sure it was pulled over or you could add it manually. I'm not sure if the feature was built to work on most fields or if there was actually any specific targeting for it, but I know when I use this field, I always double check and make sure everything was pulled in and there's nothing that has to be deleted.
The way that "insert from cited record" works is that for whatever record that you key in the number and pull in the data, it is exactly the same format in all of the fields from 760 to 787. The difference in the way something is cited is based on whether it is a serial or a monograph. It will pull in different information from the other record depending on what it is. You could look at what we do for monographs and say well, that isn't necessarily current for RDA, or it has more information than what you might put in a link in an RDA record, but that's based on "insert from cited record" having been implemented in the system back in the days of AACR2, where you could have monographic titles that were in conflict, so the only way you could identify that would be to include edition statements, the place, publisher, date, so that's why all of that is there when you're citing a monograph and it's not there when you're citing a serial, because serials had unique titles.
The only example that I think of in something like that would be an edition statement that reflects the coverage of an item, so if you had something that was perhaps the United States edition, but you had another publication that was specifically the Ohio edition of that publication.
In the input standards, that is a required subfield. It's of course really helpful in making linking fields actually link if there's an identifying number that can be used to navigate to that other record.
Yes. So, the statements that we made in the slides where it cannot be added to or edited in PCC records, that statement is only if you are not a PCC participant. But PCC participants can edit those in the records.
I don’t know that one way is better than the other. For legal loose-leaf publications, I have seen both 780/785 fields used to link between the different editions and 775 fields with subfield $i linking the editions. Normally you think of 780/785 being used to link between the different iterations of serial titles, those fields show the before/after. With loose-leaf IRs, the editions are a form of before/after so that is a valid reason to use 780/785. On the other hand, they are “editions” and that is what field 775 is used for. I think this is ultimately a matter of cataloger’s judgment.
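To sketch the two approaches described above (the title and edition statements are invented), the same relationship could be coded either way:

775 08 $i Revised as: $t Ohio legal practice manual, 4th edition
785 00 $t Ohio legal practice manual, 4th edition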
The purpose was to identify the immediate successor and predecessor so that you could go to Title A, see that it changed to Title B, go to Title B and see the link back to Title A but also the link forward to Title C, and then be able to follow the progression that way. Even though complex notes are allowed in serial titles records, they are more meant for those complexities involving the immediate title change. So, like in that "absorbed by" example that we covered, that was more of an immediate title change complex note. These title changes can get very unwieldy and what is considered a major title change and minor title change has altered over the course of time, depending on the cataloging guidelines used, so there's a bit more complexity in the situation if we included all of the title changes before a particular serial in these linking fields.
There are situations when you could point from a print to an electronic version as a later title. However, in a normal title change situation when you have a print and an electronic version for title A, and a print and an electronic version for title B, the 780 and 785 fields for the print would only point back and forth between the print records for the title change, and the same would occur for the electronic version, where the 780 and 785 would only point back and forth between the electronic records for title A and title B. Then title A, print, would be linked to title A, electronic, with the 776 field that Robin covered, and title B would be linked in the same way between the print and the electronic using a 776 field, so you would have multiple 7xx fields in this situation. There is a situation when the print ends and is continued by an electronic version that has a title change; in that case, because it is a different version record, you would use the 776 field to link to that later title instead of using the 785 field. And you would note that in the relationship area, so the subfield $i would be "Continued online," and then it would point back from the online to the print using the dates that the print spanned.
My guess is these are required if applicable when inputting full level cataloging in WorldCat, for purposes of WorldCat cataloging, and they do in general follow the guidelines put forth by the PCC. The idea is that you would want a full level record to include as many of these pieces of information as possible, especially with title changes within the WorldCat database, so that it's easier to get between the different titles and see the relationships.
In some cases, we may have looked at a field in the past and said "okay, there are standards outside of WorldCat that treat them differently, and we may want to go above and beyond, in requiring something," but I think in other cases, some of these input standards have been around for many years and they just haven't been reevaluated in light of changes in cataloging. So I think that it would be useful if people sent any fields they were concerned about to askqc@oclc.org and we could reconsider the input standards or at least discuss them and say we decided to be different than the BIBCO standard record for a particular reason. It might be it's required if applicable just because that's the way it was implemented 30 years ago.
The input standards are required no matter what the cataloging rules used are. So basically, it means if it's RDA or AACR2, you would still follow the input standards that are spelled out in Bibliographic Formats and Standards.
Because the map has been extracted from the atlas, you no longer need the directions that are usually available in the 773 to actually go to the atlas to find the map, so it would be better to use the 787 field. If you wanted to create a record for the map and it's still inside the atlas, then you would use field 773, pointing to the exact location within the atlas, where the map is located.
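A hedged sketch of the two situations described above (the atlas title and location are invented):

787 08 $i Extracted from: $t World atlas of wine regions
773 0  $t World atlas of wine regions $g plate 17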
I'm assuming in this case that you're talking about something that was a reprint in its print form and that that has been reproduced to be the digitized version that you see online. In that case, you would treat fields like the dates and 260 or 264 the same as you would have if you were cataloging its print counterpart.
In general, the date when an item is made available online is not a publication date. The date used in a record for an electronic resource which was originally published in print would be the same publication date as the print. As the electronic item is an electronic representation of a print item, the record description should describe the original print item but contain the appropriate electronic fields and coding. This relationship would use 776 fields to link the print and online versions to each other.
Sometimes a publisher will obtain a title then publish or republish it as an electronic resource, the original having been published in print and/or online by the original publisher. In this case, the new publisher will have “removed” the original title page and replaced it with a new title page. In this case, the electronic resource record description should reflect the new publication information and a note could be included in the record referencing the original item, such as a field 534, Original Version Note. A field 775 Other Edition Entry could also be used to link to the record for the original publication.
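As a hedged illustration of the two options mentioned above (the place, publisher, and title are invented), the record for the republished electronic resource might include:

534    $p Originally published: $c New York : Original House, 2012.
775 08 $i Other edition: $t Title of the original publication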
If the only difference is date, you need to determine if that date is indicating when the resource was made available online or if it is actually a publication date. If you need assistance with a specific resource, you are welcome to ask Metadata Quality staff at askqc@oclc.org, we would be happy to help.
OCLC’s Bibliographic Formats and Standards (BFAS) addresses electronic resources in Chapter 3, Special Cataloging Guidelines. Specifically sections 3.1.1, Provider-Neutral Cataloging: Online Resources and section 3.3.1, Special Types of Publications: Electronic Resources. For field 534, see https://www.oclc.org/bibformats/en/5xx/534.html, and for field 775, see https://www.oclc.org/bibformats/en/7xx/775.html.
It seems like it would probably be 767. This field can be used when the item in the horizontal relationship is the original or another translation. It might be safer to use this field because you're not pointing to what the original language was, you're just pointing to a different language edition.
You do see that in Canadian publications where the government will issue the same text in English and also in French simultaneously, so you can't say that one is necessarily a translation of the other, so if they are truly simultaneous and you don't know that there's any translation involved or what is the original language, you would use 775. You would use 765, 767 in cases where you actually have a translation that you know.
When a print title ceases and it continues as the same title in electronic version, you would use field 776 to link the two, instead of fields 780 and 785.
776 08 $i Continued online: … [on print version record]
776 08 $i Print version, -2019: … [on online version record]
Only use fields 780 and 785 when both a title change and a format change exist, for example, when the print version ceases along with a title change to the online version. Both the print and online version records representing the earlier title would point to each other using field 776, and both records would also link to the later title's online version record using field 785.
Print version record:
776 08 $i Online version: … [link to other format with same title]
785 00 … [link to later title]
Online version record:
776 08 $i Print version: … [link to other format with same title]
785 00 … [link to later title]
Later title, online version record:
780 00 … [link to earlier title, online version record]
780 00 … [link to earlier title, print version record]
In general, when a title changes and a member institution adds a new record representing that title change, the institution inputting the new record is encouraged to add the reciprocal linking field in the record representing the earlier title. If this does not fit with your workflow or you are unable to, you may email a request to bibchange@oclc.org.
In the past I asked how to link print and eBook records for older materials when there is more than one print record and more than one e-book record. I was told we are allowed to add more than one 7xx field in the record to point to the different records for the other format. Is this still the case?
Yes, you may use multiple 776 fields in a bibliographic record to point to other formats. You may also use multiple 7xx linking fields in a bibliographic record as appropriate.
Is it true that a 7xx field cannot link to a known version if that version is not recorded in the OCLC/LC system, because there is no $w record control number?
Subfield $w is "Required if applicable" in 7xx linking fields in bibliographic records. So, if a control number is available, you should add it to the bibliographic record. However, if no control number is available, you do not have to include one in the field. If a record is added at a later date, the control number may be added at that time.
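As a rough sketch of this "required if applicable" behavior, the hypothetical helper below includes subfield $w only when a control number is actually known (the function name and sample values are invented):

```python
def linking_field(tag, indicators, title, control_number=None):
    """Hypothetical helper: build a 76X-78X linking field string.
    Subfield $w is 'required if applicable', so it is included only
    when a control number is actually available."""
    field = f"{tag} {indicators} $t {title}"
    if control_number:
        # Control number known: add subfield $w with the OCoLC prefix.
        field += f" $w (OCoLC){control_number}"
    return field

print(linking_field("785", "00", "Later title", "987654321"))
# With no control number available, subfield $w is simply omitted;
# it can be appended once a record for the related title exists:
print(linking_field("785", "00", "Later title"))
```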
Offprints or detached copies are issued separately but often alongside the original, for either the author or limited distribution. Information about offprints and detached copies, along with OCLC's policy, can be found at BFAS 3.2.2, Offprints and Detached Copies.
However, you may use field 773 to link digital articles to the journal they belong to. BFAS 3.2.1, "In" Analytics, states that articles are considered part of this category and a 773 should be used. Note that some types of publications, such as a single issue of a serial, may also use field 773 but are not considered "In" Analytics in nature.
I think you could still create a linking field for the related resource even if there wasn't a functional number to link it to. So, for example, with the earlier and later titles, you might know the later title but just not know the individual numbers that go along with it. You could enter part of the information into a 785 for the later title, and that would be okay. The idea behind doing it that way would be that later when the control number comes along, you could easily fit it in at that point.
With some of them, we included subfield $i with second indicator 8 to show how you would set the field up that way. In other examples, I used the default second indicator to show that it would generate the default display constant.
Yes. We took those statements from information in Bibliographic Formats and Standards, Chapter 5, which covers what fields non-PCC libraries can add or edit in PCC records. If you're interested in seeing the full list, I recommend BFAS, Chapter 5.
Yeah, that is correct. In monographic cataloging, you typically would make an access point for a related work, and as coding has developed in the MARC format in support of the implementation of RDA, it's far easier now to be explicit about what the relationship is, so that that information can be included directly in that access point field. A lot of these linking fields would not necessarily be used in monographic cataloging.
Generally, the fields that can be added directly to PCC records by non-PCC participants have been limited to things like call numbers in additional schemes, subject headings in additional schemes, and various kinds of note fields that are pretty unique, such as a contents note or even a summary note. The linking fields haven't really been in that category. Not that we couldn't reconsider that, in light of the question, but they have never been considered to be quite in the same category for people to just add them to PCC records.
When records are merged with DDR, we're unsure of the quality of some of the records. Some of them might be very good quality, both the retained record and the duplicate, but there's no real way to evaluate this programmatically in a meaningful way for DDR purposes. So that's why some of these fields are not automatically transferred when those records are merged. For manual merges, if the person doing the merging looks at the two records and says, "oh, wow, this field really does need to be added to the other one; it would make the record more complete, it adds quality to it," then that cataloger can manually transfer it. But that does require someone to manually look at the records when merging them.
The 7XX fields are not taken into consideration directly in terms of deciding which record to keep in a merge. They would be considered for records that we would look at and say are equal in rank. Let's say we have two records that are both I-level, and the software is looking at the number of fields present as well as the number of holdings; in that sense, 7XX fields would be counted and be part of the equation of which record we're going to keep. That is a different situation from somebody manually looking at two records to merge and evaluating their content to say, "this one looks better than the other one does." And certainly, you can think of cases in serials where you would look at the linking fields in particular to figure out what is going on. If one record has coverage greater than the other, it may be that somebody created a duplicate record and added a 785 for what should have been a minor title change, which explains why this one only runs for a period of ten years while the other record is still open and ongoing, covering fifty years. You look at 7XX fields, the linking fields, but in terms of automation, they don't really get considered in quite the same way.
Field 501 is a "with" note, so it's used primarily to describe resources as they were originally published, released, issued, or executed. The 777 "Issued with" entry carries information about publications that are separately cataloged but issued with or included with the target item; you specifically would not use field 777 for bound-with notes.
CONSER practice is to not use 760/762, instead relying on 830 to describe the relationship.
I'm sure that monographic practice is to do the same thing and rely on an 8xx series tracing instead. Use of 760/762 within WorldCat nowadays is pretty rare.
That's correct. The links should be to records that are in the same language of cataloging rather than crossing from one language to another. It may be the case that today there is only a German-language-of-cataloging record available; don't put that bibliographic record number in a citation if you are cataloging in English. To cite a number, there really should be an English-language record to cite.
There's a hierarchy on which records get retained, and the CONSER record does get retained over any other record in that hierarchy. The linking fields in the CONSER record will never go away, at least as far as a merge is concerned. The CONSER record will always win out.
This would be a local decision and, even then, may vary greatly depending upon the circumstances surrounding each individual audio recording. In general, a 500 note explaining that the material has been previously released in whole or in part is sufficient. In that note, you may include further identification about earlier releases, including the audio format (such as LP or 78), the recording label or publisher, any pertinent title information, dates, publisher number, and so on.
If you look at the PCC Standing Committee on Training (SCT) Training Manual for Applying Relationship Designators in Bibliographic Records, “Guideline 13: Relationship Designators for Resource-to-Resource Relationships” seems to be the only relevant guidance. It does state that “The use of relationship designators for resource-to-resource relationships is encouraged,” but if you go through the still-official Original RDA Toolkit Appendix J (Relationship Designators: Relationships between Works, Expressions, Manifestations, and Items), none of the designators really apply to this situation. You are certainly allowed to “use another concise term to indicate the nature of the relationship” (J.1), but you may alternatively draw the inference that perhaps a linking field isn’t necessary to account for these types of relationships.
My suggestion would be generally to not bother with a linking field in this circumstance. Use field 500 to include the previous manifestation data at the level of detail you believe to be useful and appropriate. If you want to give access to the publisher and publisher number of any earlier manifestations, use field 028. If there is any title information worth giving access to, use field 740.
I'm sure that the theory is "no, it doesn't need to be there; you just need some kind of identifier." On the other hand, we're still at a point where people are dependent on that data in a linking field to actually display a name and title, to see what's going on. I expect that most local systems don't necessarily use the identifier to go grab the information from the related record and supply it in a display. Maybe some do, but I'm thinking probably most don't. We still have this historical practice of including the name/title data in addition to identifiers that we would also put in the link. And it helps in certain situations. I think certainly in the work that we do in maintaining the quality of the data in WorldCat, we've seen instances where the subfield $w had a typo in the number, and we have the name/title for the successor title in the case of a serial that had a major title change, and having that information helps in sort of figuring out what was intended when the identifier leads you to something that's clearly incorrect.
I believe that it is not, but I'm not 100% sure on that. It does not appear to be part of the phrase searching, only the keyword search. For example, field 780 subfield $a is only searchable by the au: search.
I can agree with that; it is a real nuisance to have to maintain the same information all over the place. Going back to what was mentioned earlier about just including an identifier in a link, it would make sense if we could interactively pull in the information from the related record and always display whatever is currently in that related record. That would be a good thing, and it's something some of us have been talking about for 20 or 30 years; it just hasn't happened. The way I would look at it now is that the numbers you would potentially include, the ISSN in subfield $x, the ISBN in subfield $z, and of course the control numbers in subfield $w, are the most important things to maintain, and if the citation gets a little out of date, out of step with the related record, that's something we might be able to resolve in the future using all of those identifiers. If you think of a future environment where we rely on identifiers more than we do today, then we could get information from the related resource and populate a display.
I would typically use field 775 for these situations where you have the same resource in two languages issued at the same time, so that it would be difficult to say that the English is a translation from the French or the French from the English; 765/767 would be used when you think of it more as a translation. But this is another case where, in monographic cataloging, you would more likely make an access point.
Yes, that is the final version.
Unfortunately, no, that is not an automated process; that wouldn't happen. Ideally, whoever was working on the record would see that and hop over and manually fix the record, or report it to Bibchange so we can fix it. I know that I and others in Metadata Quality, if we're working on, say, a print record and we see that 776, will pop over to the electronic record and make sure the links are correct going back and forth between the records.
The Member Merge Project is a program where we train our member institutions to merge duplicate records. It is going very well. We have 53 institutions participating, and we are starting up another round right now; we just reached out to four more institutions that are going to be joining the program. If you're interested and you are a PCC participant, please send a message to askqc@oclc.org.
There is a webinar coming up on Record Manager in a few days, and then I located a comparison chart that shows the differences between our cataloging applications that might be useful.
A longstanding practice from years ago was for our indexing to look at the coding of Source at the tail end of the 008 field and, based on that, if it was coded blank or coded c, assume the record came from the Library of Congress or a Library of Congress cooperative cataloging program. But the definitions in MARC are broader than that nowadays: blank may be used by any national library, and c may be used for any cooperative program, so Source isn't coded in quite the same way across WorldCat as you would have seen in the past. We haven't yet updated our indexing to reflect those changes, so records that come from the German National Library, which have Source blank, are consequently being marked as LC, and particularly in the case of Source c, we're marking things as Library of Congress cataloging that are not. We do have a JIRA ticket open so we can take care of that, but it's not implemented yet.
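The longstanding heuristic amounts to something like the sketch below (a hypothetical illustration of the problem, not OCLC's actual indexing code):

```python
def looks_like_lc(srce):
    """Hypothetical sketch of the longstanding indexing heuristic:
    treat 008 Srce ' ' (blank) or 'c' as LC or LC cooperative-program
    cataloging. This now over-marks records, since blank can be any
    national library and 'c' any cooperative cataloging program."""
    return srce in (" ", "c")

# A DNB record with Srce blank is wrongly flagged under this heuristic:
print(looks_like_lc(" "))
```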
I would recommend going to OCLC Support and they will direct you to the OCLC staff that could answer your question. And that's support@oclc.org.
I don't know if this will help, but in a lot of the searching that I do where I want to weed out of the results Digital Gateway records, I would put in "not AC=DC," and that would cause them to fall out of the search results.
Only one field (either field 130 or 245) is chosen as the title used in subfield $t in the linking field. If a record has a 130 field, that field would be used, because it has the differentiating information needed to distinguish the title in the 245 field from others with the same main title.
Therefore, the thinking is that there would be no need to also include the title from field 245.
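The selection amounts to a simple preference, sketched below as a hypothetical helper (the function name and sample titles are invented):

```python
def linking_title(field_130, field_245):
    """Hypothetical sketch: choose the title for subfield $t of a
    76X-78X linking field. The uniform title in field 130, which
    carries the qualifier that distinguishes otherwise identical
    titles, is preferred over the title proper in field 245."""
    return field_130 if field_130 else field_245

# 130 present: its qualified form wins over the plain 245 title.
print(linking_title("Bulletin (Chicago, Ill.)", "Bulletin"))
# No 130: fall back to the 245 title proper.
print(linking_title(None, "Journal of examples"))
```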
Libraries do report that kind of thing to Metadata Quality, and we will go ahead and close out records. A lot of CONSER participants will spot these same kinds of needed changes in their own work. So, it really is a combination of both: we take the requests as they come, and either a CONSER library will make the change ahead of us, or we make it when it's reported.
There is currently a pilot going on about adding URIs in PCC records, both for NACO and BIBCO records. Once that pilot reaches its conclusion, they will publish their recommendations and open up entry of those subfields to the rest of the PCC. So, yes, you will start seeing more subfield $0 and subfield $1 in the appropriate places. Look for best practices to come.
The LC Secretariat said that they are willing to allow more members now. See the PCC website for information on how to apply.
Yes, the MARC Organization Codes are used in the 040 for authority records.
There are membership slots that open up each year. Generally, it is the chair of the standing committees who recommends, or seeks out, new members. The PCC year starts in correspondence with the federal fiscal year, so the terms run from October through September. October 1st is when new members join. If you are interested in becoming a member of one of the standing committees, we would suggest getting in touch with one of the chairs of a standing committee and letting them know. Occasionally there are also calls for volunteers on the PCC List, and you are welcome to volunteer when there is a call.
We have several different emails, depending on the purpose. Cataloging questions go to askqc@oclc.org. Requests to correct bibliographic records go to bibchange@oclc.org. If you are not a NACO member, requests for correcting or creating NACO records can be sent to authfile@oclc.org.
You can report those kinds of links to Metadata Quality and we will take them out. The intention is that the CONSER record would only contain general links to an online serial publication, and not institution specific links.
PCC catalogers make mistakes, just like everyone else. If you find mistakes in records, do feel free to report them. We suspect you may find mistakes in DLC records too, and those are also a part of PCC records. If you do find them and want us to correct them, we'll be glad to do that. It shouldn't take very long for them to be corrected; we usually turn around requests to the Bibchange inbox within several days to maybe a week. If you are not seeing that correction being made, perhaps we didn't receive your request, so you might send it again. Also, due to the volume of requests we receive, we don't respond to each request. We want to take the time to address the request, and if we had to respond to every request that was sent in, you can see how that would cut into the time we can spend making corrections. Don't forget that you can make many, many changes to PCC records, which are listed in Bibliographic Formats and Standards, Chapter 5. There may be changes you can make that you may not be aware of. If you would like us to notify you because you are waiting on that record to be corrected before you can use it, please just add that to the email or the request and we'll try to do that.
In Discovery, if Search Expansion is configured, a user should be able to enter 4XX terms and have those terms, as well as any authorized headings, returned as part of their search results. Institutions that want to use Search Expansion will need to enable this functionality in Service Configuration and select the authority files they want used. Should an institution need any help, they can contact their Customer Service area.
Go to https://worldcat.org/config/apps and navigate to WorldCat Discovery and WorldCat Local > Search Settings > Search Expansion Settings.
Long-standing practice for that kind of information, whether in field 264 or 260, would be not to include a period after a closing bracket at the end of the field.
Sinopia is a cataloging interface where you can catalog resources using linked data. There is a website you can visit to read about Sinopia and the efforts involved with the Sinopia cataloging interface.
Send a message to bibchange@oclc.org so that we can take a look and investigate to see why your holdings may have been attached to this incorrect record. If there is a problem with matching, we can address that. If you have a Data sync project, we can get you in touch with the Database Specialist for that project to see how we might be able to improve the matching, if it's matching incorrectly.
Hovering over those MARC institution codes in an authority record in Connexion and seeing what library it refers to is on our wish list for a future enhancement. It would be a nice feature. However, it is not something that is feasible in the near term. Also, at this time, there are currently no plans to add a hover-over feature for the institution symbol in Record Manager, neither for bibliographic records nor for authority records.
Your OCLC symbol is used in bibliographic records, your MARC organization code is used in the 040 field of authority records. In the past, OCLC sometimes did carry MARC organization codes in the 040 in bibliographic records, but currently we convert those to OCLC symbols.
If you are unsure if a note should be removed from a record, you are always welcome to send those to bibchange@oclc.org and we will look into it. Or you can use the error reporting function through Connexion or Record Manager. Our April session for Virtual AskQC Office Hours is going to be Local Data in WorldCat Records on Tuesday, April 13 at 9 AM Eastern and Thursday, April 22 at 4 PM Eastern.
WorldCat, as we refer to it, is the bibliographic database that you search. OCLC is the organization to which all five of us report and to which many of your institutions belong. We've tried to limit references to the bibliographic database to WorldCat, that is, to refer to the bibliographic database as WorldCat, although for decades people have colloquially referred to all of OCLC's databases as OCLC. Our preference is to refer to the bibliographic database as WorldCat. Some people think of WorldCat as the Discovery database or the WorldCat.org database, which is openly discoverable or searchable on the web, and refer to WorldCat as something different from what is used in cataloging, but the database is the same database no matter what interface or service it is used as part of.
Leave it intact with the spelling with the letter 'u'. We serve libraries in countries outside of the U.S. that spell "color" differently than we do in the United States. In our own work, we tend to leave the two spellings intact as found on records. If you were adding that information and keying it in for the first time because it wasn't there, then use the spelling that you are familiar with. Otherwise, leave the spelling intact, because it is not necessarily incorrect.
We have an internal database called Journal History where we can view previous versions of the bibliographic record. That is not something that we can, at this time, make available externally. If you have a question that you think can only be solved by viewing the history of the record, send that query to either askqc@oclc.org or bibchange@oclc.org and we can help you with figuring out what's going on with a particular record.
If WMS is your local system and there is something you want edited in a PCC record, please send it to bibchange@oclc.org. We will make the edit if it seems like that is the feasible thing to do or discuss it with you further if there are questions.
It turns out that in our processing for Google and HathiTrust resources, occasionally an incorrect record is cloned to represent an item online. If you can tell that has happened, you can report the record to bibchange@oclc.org and we can adjust it to reflect what is at that one link, as long as it looks like the record really has just that one link and is supposed to represent what is at that URL. If it has picked up additional links, then the record may be somewhat more confused (one of the links refers to one version of the resource, while the other links refer to a different version), and that kind of thing should be reported to us. Ordinarily, if it's just the one 856 field on a record that was derived, with symbol OCLCE in the 040 field, you can make adjustments to that record yourself and also correct the link in the 776 field to point to the correct version of that same item in print.
For many years there had been a policy in the NACO file to not necessarily include every uniform/preferred title for works in field 130. Instead, it could just exist on the bibliographic record itself. That is why, when you look at the CONSER file, you'll see so many 130 fields that are there to differentiate similar titles, the same title, for two different publications. So, yes, the CONSER record itself would be the authoritative source for the 130 that you were going to use as a subject heading on another record.
Yes, essentially that is the case. Because of the limitations on editing, adding, or otherwise changing fields in a PCC record, you may find that if you have made an error in a field, it cannot be edited or corrected after the record has been replaced, because that field now already exists. So, it is much preferable to replace a PCC record in a single replace transaction.
No, individuals may not join PCC, it must be institutions. If you are from a small library and only have one or two people that are interested in participating, your institution can still join PCC. There are lots of funnels within NACO that allow smaller institutions to join and not have to contribute large amounts of records.
You may see older records that have encoding level 4 because when the PCC started, level 4 was the code that PCC libraries used for BIBCO records. That hasn't been true for quite a few years now, but older records do still have that code and you'll still see them within WorldCat. There are still some other libraries that use code 4. They may or may not be PCC records, but probably are not if they are using it currently. As for level 7, when the Library of Congress creates new records, they automatically add 'pcc' in the 042 field for almost all their cataloging. There is not a problem with encoding level 7, which is minimal level, in PCC records, as long as it accurately represents the record. Any PCC records with encoding level 'blank' ought to adhere to the BSR (BIBCO Standard Record) or CSR (CONSER Standard Record).
It seems that in the case of BIBCO records, a combination of encoding level 7 and 'pcc' 042 would be a little more unusual and would not pass our validation. Encoding level 7 in combination with 'pcc' might be something that you see more often in a CONSER serial record. The thinking is that 'pcc' indicates that the access points are under authority control and encoding level 7 is indicating how complete the description is, and those are two different things. So, it is possible to have this combination in CONSER. Although typically, if somebody is doing full authority work so that they could add code 'pcc', they usually do a more complete description. So, the combination of encoding level 'blank' and 'pcc' would be far more common.
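The combinations described here can be summarized in a sketch like the one below (a hypothetical illustration of the stated rules, not OCLC's actual validation code):

```python
def pcc_encoding_level_ok(record_type, encoding_level):
    """Hypothetical sketch of the combinations described above for
    records coded 'pcc' in field 042.

    record_type    -- 'monograph' (BIBCO) or 'serial' (CONSER)
    encoding_level -- ' ' (full) or '7' (minimal)
    """
    if encoding_level == " ":
        return True                     # full level: fine for BIBCO and CONSER
    if encoding_level == "7":
        return record_type == "serial"  # minimal level: seen in CONSER only
    return False                        # other levels not expected with 'pcc'

# A minimal-level BIBCO monograph with 'pcc' fails validation:
print(pcc_encoding_level_ok("monograph", "7"))
```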
The combination of encoding level 7 and 'pcc' in BIBCO records fails validation because in monograph records we are not supposed to see that combination. But, as mentioned, it has been seen on some records. A search in the database revealed that for monographs we have more than 9,000, maybe closer to 10,000, records that fall into that category. They look like errors that have come from the Library of Congress. There isn't a situation where we have converted any encoding level 'M' records to encoding level 7, so they were received by us, presumably, as encoding level 7. That being the case, we should probably take the 'pcc' code out of those, or at least look at them to determine whether the encoding level is an error and they really should be full.
We will investigate further to see what is in the Library of Congress' catalog versus what we have, in case something did change on our side along the way, determine what the issue is and take care of it.
No. We have identified this issue and will be doing something about them in the coming weeks.
Yes, just like the case with field 130, if a CONSER record has no 130 because the title was not in conflict, then that information from field 245, the title proper in subfields $a, $n, and $p, would be considered authorized to use for an access point for that serial in another record.
No, it is not creating a problem. It's best practice to do a replace once, but if you do notice an error or a typo that you need to fix after you replace a record the first time and need to replace the record a second time, that's fine.
There is some work currently going on to update portions of the CONSER cataloging manual. We are not sure when those revised sections will be available.
Sinopia is a cataloging interface where you can catalog using linked data; we recommend going to the Sinopia website for more information. Sinopia website: https://sinopia.io/. Standing Committee on Training, Sinopia Training: https://www.loc.gov/catworkshop/Sinopia-Training/index.html.
The moratorium has been lifted as of March 1, 2021. If you are interested in applying for PCC membership, you may do so.
We suggest contacting the Secretariat at the Library of Congress. Because of the moratorium mentioned earlier, there were not any training sessions given in person or online over the last year. It is possible that some could be planned now that the moratorium has been lifted. A lot of training has been recorded and is on the website.
This is the issue related to the coding of the Source (Srce) element in the fixed field, where records are either coded with Srce 'blank' or coded with Srce 'c', and they display as if they are LC when they are not. That is something that we are working to resolve. It is an outstanding issue that has been reported and is in our backlog to work on, but we are unsure when we will be able to get to it.
Answer (from participants in chat): That is the correct order of 5XX notes and should not be reformatted. Information about note order can be found in Bibliographic Formats and Standards (BFAS) at https://www.oclc.org/bibformats/en/5xx.html.
https://www.oclc.org/bibformats/en/quality.html#requestingchangestorecords shows all the different ways you can report issues with records to Metadata Quality staff.
If you're unsure whether a field is too local and does not belong in a WorldCat record, but you don't want to take it off, feel free to send it to us. If you are able to edit the WorldCat record to remove what is clearly local data, by all means do that, but we would appreciate a note about it in whatever way is convenient for you, whichever way you normally report errors. The odds are that if you're finding a local note on one record, it's going to be on multiple records, and that gives us the opportunity to look at it, hunt down any additional records, and correct them in bulk. There might also be an opportunity for us to reach out to the institution that contributed those notes for educational purposes. So, by all means, feel free to edit the record if you're able to, but we would also appreciate being notified so that we can follow up if additional actions need to be taken.
Certainly, you're going to find records that have local information. It may not be coded as local information, but when you look at it, you know it's local. Please do report those to us. It may be that you have something unique, for example an artist's book, something original that you're cataloging for your collection; you want to make the record as descriptive as possible for your users, and you keep your records in WorldCat. I'm not sure of the philosophy behind it; I can see plenty of reasons why you would want to, but always keep in mind that if you're sharing this information in WorldCat, other institutions are going to see it. It's going to show up in their records as well, or in the main record in WorldCat. There are lots of different opinions about cataloging rare materials. What's important to somebody who deals with rare materials may not matter to somebody who works in a more general way, who may look at the item as just an old book as they add it to the collection, or may look at it and say this is really valuable and needs to be described in very specific detail. A lot of the details that you see in rare book cataloging are really interesting to others dealing with the same kinds of materials: they're trying to determine whether they have another copy of exactly the same thing, and differences in printings are going to be of interest to different users. We try to accommodate everybody as much as we can, and that means we lean on the side of allowing more information in records that represent rare materials, with the idea that somebody who doesn't need it and is using the same record for copy cataloging can perhaps edit it out locally. It is very much a different philosophy for rare materials versus everything else.
See the WorldCat Discovery release notes, March 2021. If you don't want donor information in a particular field to be shown to your users, you can omit it from being shown.
WorldShare Analytics Office Hours
I'm going to pass this along to the analytics team and have them reach out to you, if you want to send us your email address or email ask, we'll make sure you get in contact with the right person.
That would be for a note or added entry where, say, somebody famous had donated an item to a specific institution. Say they were a noteworthy person, and this is a rare book or something, and they passed it on to your institution; you want to make note of that in the bibliographic record. It's rare, and it's unlikely anybody else is going to have this same item. You have the note, and you have the subfield $5. Say somebody looks at this record while looking for this item, and they see your resource has a 500 note with subfield $5 saying it's signed by Abraham Lincoln. That's kind of noteworthy. It's probably of some interest outside of your local institution, and that's information that's made available to everybody else, even though it is only specific to your resource.
Yes, in the case of provider-neutral cataloging, there was a need to indicate preservation information that would be specific to a single institution. So rather than have two records for an electronic resource when a library has been involved in some digital preservation program, the decision was to go ahead and include that information in a single provider-neutral record that would otherwise be used for cataloging any instance of that resource online. Preservation information, though, is not necessarily of interest to everybody, so one of the preservation fields that are used would be marked with subfield $5, to say that it is really specific to the particular instance that has been used for digital preservation.
A print book was released in 2014.
Publisher in eBook matches publisher in the 2014 print record.
Pagination in eBook matches 2014 print record.
Option A: 264 _1 2014. Single date in Fixed field, 2014, ---- (ignore the 2021 date)
Option B: 264 _4 $c c2014 ; 264 _2 2021. Single date in Fixed field, 2014, ----
Option C: 264 _4 $c 2014 ; 264 _2 2021. Reprint date in Fixed field, 2021, 2014
Other options?
Recall 264 2nd indicators:
264 _0 Production.
264 _1 Publication.
264 _2 Distribution.
264 _3 Manufacture.
264 _4 Copyright notice date.
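The options above can be laid out side by side in a small sketch. This is purely illustrative, not OCLC's implementation: the `fmt_264` helper and the option table are assumptions made for the example, while the indicator values, dates, and Date1/Date2 fixed-field pairs come from the options listed above.

```python
# Illustrative sketch: each option pairs one or two 264 fields with
# the Date1/Date2 fixed-field values it implies.

def fmt_264(ind2, date):
    """Format a 264 field with a blank first indicator and a $c date."""
    return f"264 _{ind2} $c {date}"

OPTIONS = {
    # option: ([264 fields], (Date1, Date2))
    "A": ([fmt_264("1", "2014")], ("2014", "----")),
    "B": ([fmt_264("4", "c2014"), fmt_264("2", "2021")], ("2014", "----")),
    "C": ([fmt_264("4", "2014"), fmt_264("2", "2021")], ("2021", "2014")),
}

for name, (fields, (date1, date2)) in OPTIONS.items():
    print(f"Option {name}: {' ; '.join(fields)} | Dates: {date1}, {date2}")
```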
With that criteria, the description of the electronic resource should match the description of the print resource, and it's going to be the additional electronic fields that bring out the dates, or any information that's electronic. That doesn't necessarily include dates, because different providers make that resource available online in different years. Say one provider has a contract or an agreement to provide this title for five years and then drops it from their collection, and somebody else picks it up and makes it available in their collection. Then you'd have to go into the bibliographic record and change dates. The actual date is the date the item was originally published, and then the electronic information is added to make an electronic record. Under provider-neutral cataloging guidelines, you're taking the publication date from the title page that you see, which would normally correspond to the print. There isn't the same level of interest in when the item was digitized and placed online, partly because the one record is going to stand for all instances of that same resource as found online. They were probably put online by different providers at different points in time, so it's that original date of publication that is there on the title page. Part of it is, does this conflict with what is in AACR2 or RDA? The answer is that it is absolutely not in line with either of those standards, but it's what is required for provider-neutral. To make sure that this was accounted for, some providers changed the title page date, but nothing else; how does that play into this? Providers can play all sorts of horrible tricks on us. It may be that you would end up looking at the print record and realizing it's the same thing, particularly if you're dealing with a provider that has a history of changing bibliographic information, or of not necessarily presenting everything that you would expect to see: they digitize a book, but they don't give you that original title page.
It depends on what is available to you as you are cataloging. A record that you might base on one instance, available from one provider, might be altered when the resource is available from a second provider and an original title page can be seen. You have to take that into account as a cataloger. It's not like you can do endless research on some item when you're cataloging it; you pretty much have to take what you see. If you suspect that you can't see the original title page, or that kind of thing, describe from the title page that you have. It's possible to include information in a 588 to say what you have based the description on, and it's possible to base the description of the electronic version on the print item itself.
We view those as essentially equivalent. I know that a lot of libraries look at the MARC definition and say, well, they don't mean exactly the same thing, but in the context of WorldCat they really do essentially mean the same thing. There isn't a particular preference, but we do change them to a second indicator "4" rather than a second indicator "7" with a subfield $2 "local", in part because we transfer data into records based on the scheme identified by the second indicator and the subfield $2. To our system, a 655 with second indicator "4" looks like it is in a different scheme than one with second indicator "7" and a subfield $2 that says "local", when in fact they're actually the same. So, if we make them the same, we don't have the same level of duplication that we would have otherwise. It's possible to call up a record and see the very same term in a 655 with second indicator "4" and also in a 655 with second indicator "7" and a subfield $2. We try to avoid that as much as possible.
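The normalization described above can be sketched in a few lines. This is an assumption-laden illustration, not OCLC's actual macro: fields are modeled as simple (tag, second indicator, subfields) tuples, and the only rule shown is recoding 655 _7 $2 local as 655 _4 so that the same term entered both ways deduplicates.

```python
# Sketch: treat 655 _7 $2 local as equivalent to 655 _4 and deduplicate.

def normalize_655(tag, ind2, subfields):
    """If a 655 uses second indicator 7 with $2 local, recode it as
    second indicator 4 and drop the $2."""
    if tag == "655" and ind2 == "7" and ("2", "local") in subfields:
        subfields = [sf for sf in subfields if sf != ("2", "local")]
        ind2 = "4"
    return tag, ind2, subfields

def dedupe(fields):
    """Keep only one copy of each field after normalization."""
    seen, out = set(), []
    for f in fields:
        norm = normalize_655(*f)
        key = (norm[0], norm[1], tuple(norm[2]))
        if key not in seen:
            seen.add(key)
            out.append(norm)
    return out
```

Given the two codings of "Graphic novels." from the example above, `dedupe` would return a single 655 _4 field.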
Yes, you are able to export the records through Record Manager. The site help-de.oclc.org is a great resource on how to use query collections: About query collections in Collection Manager.
We do deal with removing 856 fields that are specific to an institution, particularly when there's a more general one available instead. It's much easier to do for commercial providers where we have a general URL available: we'll try to transform the one that's local into the general one, and then, if it turns out to be a duplicate field, it will drop out of the record. We do that using macros to clean these kinds of things up, but it only deals with a portion of the problem. There was a period of time when we had lots of 856 fields transferring, more than is the case now, so a lot of these things have come from the past. If you see any one specific institution where this happens a lot, or a URL from a particular provider where this has happened a lot, go ahead and contact us, because then we can put some effort into dealing with that particular problem and get it out of the way as much as possible.
Absolutely shoot us a message and let us know, and we will definitely take a look at it. If you are concerned and want feedback, just say: I found this, there's possibly more, could you let me know? We'll definitely take care of it, respond back to you, and say, yes, thanks for reporting this, there were 50 more that we've taken care of. These often have subfield $5s, and we still want to know, because not everybody uses the $5 as it was intended, so we would want to review that.
Not yet, but future functionality will include this. That's still not scheduled, but in the future, you'll be able to do that. It's planned work, we have the requirements for that, we just don't have a timeline when that will be released.
We're very careful about what we do with Library and Archives Canada records, in particular because of the need to accommodate some elements that are needed for their unique catalog and the situation of accommodating records whose language of cataloging is French as well as English. We like to have these reported, and we do remove things when it's appropriate to remove them. Speaking generally, we do take care: we are rather conservative in our edits, just as we're conservative with our merging. That may leave more data, or in a merging case, duplicate records within WorldCat, but that's because we're erring on the side of caution and feel that a duplicate may be more acceptable in some cases than removing it and losing that information. Rich comments that most of the community, he believes, has a hands-off approach to LAC data in WorldCat, so we're definitely very, very respectful of that.
I would agree. Because the record is intended to stand for all instances of that same resource as available online, particularly when it's available from more than one provider, a link that is specific to one institution perhaps shouldn't be needed.
I do know that with the RDA Toolkit having a translated version in Spanish, you can find more of the terminology, the controlled vocabulary, for the RDA terms in Spanish; you can get more information in there. The macro and the work that we're doing add Spanish 33X fields to the record when appropriate, so we automatically add Spanish 33X fields.
Hayley's example, of course, was in the context of an LHR, but in bibliographic records the expectation is that the 33X fields would be in the same language of cataloging as the rest of the description. Occasionally we'll come across a record that is marked as cataloged in Spanish, but the terms in 336, 337, and 338 are in English. We'll go ahead and convert those whenever we can, and then mark the language code in the subfield $2, so you'll see something like the RDA carrier code with a language code to indicate Spanish.
RDA Registry has such controlled terms in multiple languages.
Bibliographic Formats and Standards: all of the fields are listed in there, and also on the online help pages. If you go to oclc.org and look under support or help, you will find local holdings information and local data information. The help site can help; do a keyword search. It's really nice, and it has categorized listings of all the fields. Plus, you'll have the resources in this video to look back on as well.
We'll look into this.
Thank you very much for the question. We have limited expertise in this area; the Metadata Quality team doesn't work with this much, but we had some internal chat as the session was going on. First, to share, which I've now put into the chat for everybody, is a link to understanding My Files reports. If that ultimately doesn't help answer this particular question, then we suggest reaching out to support@oclc.org. That is the support system, and they will absolutely be able to direct you, or anyone who has questions regarding these cross-reference exception reports, to the right place.
My Files reports: please reach out to OCLC Support.
If the group has some sort of Discovery package, and they want to share that particular note, then an LBD seems to be the more appropriate place, rather than an LHR.
Yes, in both Record Manager and Collection Manager. It's a little easier to do in Collection Manager as a query collection. If you want to learn how to do that, check out this link.
Then the other part of the question is whether to use your institution's public interface even if you're subscribing to Discovery. That's a great question, and a lot of institutions do need to deal with that particular situation. I know that there are reasons in some cases to use your public interface, and there are reasons to use Discovery. Of course, us being on the call here, we would love for you to use the WorldCat Discovery interface, and there are quite a few benefits to doing that as well. But the first part of that question is a really good local question to have not only with the catalogers within your institution but also with the people who work with public services and users, which may be one and the same. It's definitely a local decision.
You also have a comment about music libraries having local notes in the music records for these. There are providers around that do not have any kind of general URL that would take you to something about the item, whether metadata or, in the case of something like ebooks, a title page that they might let you see. In those cases where it's entirely local, we would certainly like to reduce the number. The problem we have when there's a general URL plus one additional that is institution-specific is really a problem of clutter in these records, where you have the same domain name over and over and over, and most of those are not going to work for anybody. If there was only one URL for Naxos on one of those records and it was local, it's still not going to work for anybody. Our general approach in the past was that we're okay with perhaps removing the others; we've looked at it and said, well, one is better than none at all. So it's something that we may need to discuss, what we want to do with these kinds of things across the board, because if it's entirely something that is unavailable to everybody else in WorldCat, that sort of raises the question of why have it.
What is described in the question is a link to a local book fund plate, or table of contents, that's password-protected. Those are local, institution-specific links, it sounds like. So, no, they should not be in the WorldCat bibliographic record.
I'm used to seeing them in the 024 field. To be honest, I have not yet come across one in an 856, and I would tend to see them more in an 024 field. We actually recently had an example in Bibliographic Formats and Standards showing it in field 024, even though we don't see them a whole lot within the 856. It seems to me, though, because of the nature of being a potential link to the item itself, those would go on the WorldCat record.
https://www.oclc.org/bibformats/en/8xx/856.html has an element about DOI
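The two placements discussed above can be sketched side by side. This is an illustration under stated assumptions, not OCLC guidance: the DOI value is made up, the 024 coding follows the common first indicator 7 with $2 doi pattern, and https://doi.org/ is the standard public DOI resolver.

```python
# Sketch: recording the same DOI as an 024 identifier and as an
# actionable 856 link via the doi.org resolver.

def doi_fields(doi):
    """Return illustrative 024 and 856 field strings for a DOI."""
    return [
        f"024 7_ $a {doi} $2 doi",
        f"856 40 $u https://doi.org/{doi}",
    ]

for field in doi_fields("10.1000/182"):  # hypothetical DOI
    print(field)
```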
If something doesn't validate, please notify us. We see validation updates often, and we usually need to update both the validation rules and the user interface to account for the change.
If there are invalid values in subfield $2, perhaps they are just codes from an earlier time period, as opposed to a more specific code based on terminology coming from the RDA Registry, because those codes have developed over time and our validation has had to keep pace with that development. But I suppose that if a particular field has a subfield $2 with a generic code when the expected term for that 38X field is from some specific list within the RDA Registry, it would probably fail. Let us know about any one example; that's one of those cases where we can go look for more of them and potentially get them changed to a valid code.
Our group does not work directly with Discovery but here is a link to documentation: https://help-de.oclc.org/Discovery_and_Reference/WorldCat_Discovery/Display_local_data. We encourage you to write to OCLC Support and they will help you.
Record Manager interweaves it. Collection Manager has the two options.
When errors or issues like this are reported to bibchange@oclc.org, we review them, and if it looks like something that has been added to multiple records, or if it's something local, we'll go ahead and fix that record. Then we usually go on and search for additional records, targeting URLs that require a login and that only somebody from that institution can use with their login credentials. Of course, we always want to be careful about any that we end up deleting or transforming in some other way. But if we get a report where one record was reported and we have another 5,000 with the same kind of issue, we'll try to add logic to the macros that we use, to possibly transform a URL into a generic one for that provider as opposed to an institution-specific one. Periodically we have gone back through ebook records to deal with this. There was a time period when we did a lot of field transfer that wasn't really intended, and we picked up a lot of institution-specific URLs that we needed to get rid of, and we could probably do that over and over and over again to help clean them up. If you see a problem like that, where the same kind of institution-specific URL appears across a whole set of records, let us know and we'll try to get rid of it.
These local notes appear in the modernized view of Discovery in the item details. They can appear before or after the WorldCat notes. Libraries can use their WorldShare sign in information: https://www.oclc.org/community/discovery/modernization.en.html. This is a link to a specific program that talks about this configuration option.
Yes, that can be reported to bibchange@oclc.org. And if it's more of a general question about local information, not about a specific record, send that to bibchange as well. We would add, though, that if it is local information from another institution and you are confident, you can delete it if doing so makes the record better. But maybe shoot us an email anyway and let us know, so we can look and make sure there aren't additional records with the same issue.
That’s a very good question but outside of the expertise of this group so we need to have you contact OCLC Support and they will get the right group to assist you.
They will generally sort by the type of subject heading so that you can see the entire set of headings that were assigned according to a particular scheme grouped together and I realize that for some libraries this is a little more problematic, because it may be in a bilingual setting. You're trying to duplicate headings in English also in Spanish, and you want to see them, sort of paired up. But, eventually, when other processes get to those records, we'll sort them by indicator with just a regular reformat in Connexion. You may be able to input them, paired up, but they won't necessarily stay that way. If you are using a local system, and you're exporting the WorldCat record, the export will maintain the order that they're in. When the record first arrives, it's no doubt in the order that it was when it was sent to us, but once it's in the database and subject to other processing that we do, stuff can get sorted around. Discovery doesn't necessarily follow the MARC tags, the subjects may end up in a different order anyway, regardless of what order you see within Record Manager.
Yes, if you are an OCLC member and Connexion subscriber, you are able to access Record Manager for free. To do this, go to WorldShare Record Manager Ordering (oc.lc/getrm) and fill in the form to request access.
If you need to alter a statement of responsibility in a WorldCat record, in general, this can only be accomplished by editing the record manually or by reporting the correction to bibchange@oclc.org.
In certain circumstances, it's possible that the record will be updated through a record replace process depending on how the project is profiled. If it's set to 'replace own' records and another library has not modified the record, i.e. no library symbols in field 040 subfield $d, the record will be replaced with the modified version that was sent as long as the record matches.
If the statement of responsibility is correctly subfielded (i.e. $c), it should not affect matching when the record is matched via a batch process. The presence/absence of a statement of responsibility alone does not affect matching.
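The principle above can be illustrated with a small comparison sketch. This is not OCLC's matching algorithm; it is a minimal assumption-based example showing why a correctly subfielded $c doesn't matter: the statement of responsibility is dropped, and trailing ISBD punctuation is normalized, before the two fields are compared.

```python
# Sketch: a 245 with and without a $c statement of responsibility
# reduce to the same comparison key.

def match_key(field_245):
    """Drop $c and trailing ISBD punctuation, then lowercase."""
    head, _, _ = field_245.partition("$c")
    return head.rstrip(" /.").lower()

with_sor = "245 10 $a The great novel / $c by A. Writer."   # hypothetical
without_sor = "245 10 $a The great novel."                  # hypothetical

print(match_key(with_sor) == match_key(without_sor))  # prints True
```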
For English language of cataloging, headings should be controlled to the Library of Congress Name Authority File (LC NAF) and the Library of Congress Subject Headings (LCSH). The $0 will disappear when the headings are controlled but when the records are exported, the headings will include the URIs.
The presence of the 667 field alone will not prevent the heading from being controlled with automated controlling. To prevent the heading from being controlled, it needs to have special coding in the Fixed Fields. For example, headings coded as undifferentiated names will not be controlled via automated controlling.
We are aware of the benefits of adding the option to search these vocabularies from Connexion and Record Manager. It is on Metadata Quality’s list of requested files to be added but currently OCLC has no plans to add these.
No, not at this time, but we'll consider it. You are welcome to request enhancements using the OCLC Community Center’s Enhancement Suggestions and narrowing the topic to Collection Manager (https://www.oclc.org/community/enhancements.collection_manager.en.html). Many of the enhancements that have been developed in the past, have come from these member requested enhancements. If you do not already have access to the OCLC’s Community Center and have a cataloging subscription, you will need to request access at oc.lc/getrm.
All Program for Cooperative Cataloging (PCC) level authorizations and roles come with the capability to edit and add records to the LC/NACO authority file. If you don’t already have a PCC level authorization or role, then you are welcome to apply to join the PCC. PCC trains libraries and when a library participates, that library will be given a PCC level authorization or role. You can request training at the PCC website (https://www.loc.gov/aba/pcc/). If you are currently a NACO library and need to adjust your authorizations that you already have, email orders@oclc.org or fill in the webform (https://www.oclc.org/en/cataloging-subscription/ordering.html).
For more information on the PCC’s NACO, BIBCO, and CONSER programs, see the Program for Cooperative Cataloging website (https://www.loc.gov/aba/pcc/).
For more information on authorization levels and roles within Record Manager, see BFAS 5.2.1, General Guidelines (https://www.oclc.org/bibformats/en/quality.html#generalguidelines).
If you have further questions, please email us at askqc@oclc.org.
We currently have a development ticket to add the $0 as an option when exporting NTA records. In general, it depends on the settings when exporting records.
There are currently no plans for adding the HTTP URIs for other vocabularies, although we can see that it would be useful. You should be able to see the subfields $0 with the URI code/number related to that, so while not a direct use of an HTTP URI, if you were using software like MarcEdit, you could append that part to the number and it should get you the same results as a URI.
Unqualified names will not be controlled automatically since they require a choice from the user. A cataloger must verify that that is the authority that applies to this name.
When controlling descriptive headings, the language of cataloging has to match. When controlling subject headings, the language of cataloging does not have to match.
While FAST is not searchable using the cataloging interfaces, FAST headings can be searched in searchFAST (http://fast.oclc.org/searchfast/) and added manually to WorldCat records. OCLC also automatically adds them to records based on Library of Congress Subject Headings (LCSH) that appear in the WorldCat records.
Not at this time, however, you are welcome to request this as an enhancement using the OCLC Community Center’s Enhancement Suggestions and narrowing the topic to Collection Manager (https://www.oclc.org/community/enhancements.collection_manager.en.html).
Other than the retrospective that Nathan discussed in the presentation, there are no plans to do this on a regular basis for headings that can control automatically. However, generally, when a bibliographic record is replaced or updated, the offline control heading service will be called to control all controllable headings.
We have an open issue for this in the hope that this will be an option in the future, but right now we are waiting on development time and resources to be able to install this. We would love to be able to see this happen, because it would go a long way in helping with the quality of the controlled headings.
No, we currently don't have that. You can always report cases to AuthFile either using the form or email (authfile@oclc.org).
No. Once you have joined PCC and participated in the training, then the PCC will send you information on how to contact OCLC to set up cataloging authorizations.
If Leader/14, Leader/15, and Leader/16 are all "b", then it would prevent automated controlling because the record wouldn't be valid for use. Otherwise, we check 008/09, but you shouldn't attempt to use this code to prevent automated controlling; instead, the best way is to use Leader/14-16.
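The check described above can be sketched as a byte test. This is an illustration only: the sample leader strings are hypothetical, and the rule encoded is just the one stated above, that positions 14-16 all set to "b" mark the record as not valid for use in automated controlling.

```python
# Sketch: test bytes 14-16 of an authority record leader.

def usable_for_controlling(leader):
    """Return False when Leader/14-16 are all 'b' (not valid for use),
    per the guidance above."""
    return leader[14:17] != "bbb"

# Hypothetical 24-character leader-like strings, for illustration only.
blocked = "00000cz  a2200bbb  4500 "
allowed = "00000cz  a2200nnn  4500 "

print(usable_for_controlling(blocked))  # prints False
print(usable_for_controlling(allowed))  # prints True
```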
There are a variety of reasons, but primarily it depends on how busy our system is in updating the bibliographic records after a change to an authority file record. We are working on ways to improve the process. If something has not changed after a week, it's possible that it will not be changed. Please report these to authfile@oclc.org to see if there is a problem we can look into.
The value there is going to be the code that's related to the text label in the subfield $a. If you go to a source like the Library of Congress Linked Data Service (https://id.loc.gov/), all of the headings are associated with a URI, either as a bare control number or as an HTTP URI that includes https://id.loc.gov/. One way to add this in a MARC record is to add it manually, but we recommend using some sort of automation, such as controlling headings, instead. Other tools such as MarcEdit also allow you to add URIs to your records. The value of the subfield $0 is the URI code coming from a source such as https://id.loc.gov/.
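The relationship between the bare control number and the HTTP URI can be sketched as a simple prefixing step. The base paths below follow id.loc.gov's public URL pattern; the vocabulary keys and the helper are illustrative assumptions, not an OCLC or LC API.

```python
# Sketch: build an id.loc.gov HTTP URI from a bare authority control
# number, removing the internal space that LCCNs often carry.

LOC_BASES = {
    "names": "https://id.loc.gov/authorities/names/",
    "subjects": "https://id.loc.gov/authorities/subjects/",
}

def zero_to_uri(control_number, vocabulary):
    """Prefix the appropriate id.loc.gov base onto a control number."""
    return LOC_BASES[vocabulary] + control_number.replace(" ", "")

print(zero_to_uri("n 79021164", "names"))
# prints https://id.loc.gov/authorities/names/n79021164
```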
There are a variety of reasons. We get records from thousands of sources. Some catalogers will work directly within the WorldCat systems, Record Manager or Connexion, to do their cataloging, but a lot of catalogers work within their local institution's ILS and send us the records. We have several processes to match, merge, and clean up the records as they come in, and anything automated is never going to be 100% good. The other issue with some of these records is a matter of perspective. We get a lot of vendor records for items that have not been published yet but are in the works. These, for the most part, aren't very useful to catalogers and can create a lot of clutter when searching for other materials: the title might not be correct, you might have an all-caps situation, and some of the fields might be wrong. While not useful for catalogers, we've had several conversations with a variety of libraries indicating that those records are useful to acquisitions staff, who need to attach an order record for the pre-publication work because they've already planned on ordering it; they are just waiting for it to come out. So it's a matter of fitness for purpose depending on who the user is. Records that are not so great for cataloging might be just fine for another user. That's one of the trade-offs of working within an aggregated system like WorldCat: you get the benefits of catalogers updating and enhancing records on one end of the spectrum, and on the other end you get added noise as a cataloger while we try to help out these other situations, like acquisitions or ordering specialists.
As long as everything is going as it should, it takes 2-3 days depending on when it’s submitted. If it's updated today, then it should get to the Library of Congress (LC) tomorrow. LC would then send it back that night and it would return the next day.
We would advocate that you do your cataloging in either Connexion or Record Manager and then export the records to your local system. That would allow you to improve and enrich records in WorldCat while you are doing your cataloging.
Depending on your local system’s requirements, you could possibly set up the TCP/IP connection which would connect directly to your system and add the record (including any changes made to it) instead of taking the extra steps of exporting the records and then importing them into your local system.
For information on exporting within Record Manager, see https://help-de.oclc.org/Metadata_Services/WorldShare_Record_Manager/Bibliographic_records/Export_MARC_21_records
For information on exporting within Connexion Client, see https://help-de.oclc.org/Metadata_Services/Connexion/ConnexionClient/Connexion_client_documentation
For information on exporting within Connexion Browser, see https://help-de.oclc.org/Metadata_Servi..._documentation
David Whitehair posted in OCLC-CAT in January, that an early adopter field testing was planned for May/June and release was tentatively planned for July/August. On June 18, 2021, we are hosting a Cataloging Community Session where David Whitehair will be giving an update on what is included in Connexion 3.0.
The data set within WorldCat is exactly the same whether you search within Connexion (current or new version) or Record Manager. There is different functionality between Connexion and Record Manager, but the set of data remains the same.
In order to control headings, OCLC has to host the file or a copy of the data set within our systems. At this time OCLC does not have these authorities loaded into our system. While we would love to do this, we don’t know when this will happen. Metadata Quality has requested this in the past. If you would like to let OCLC know that an authority file, such as LCMPT or LCDGT, is important to control, please go to the Cataloging Community Center (https://www.oclc.org/community/cataloging-metadata.en.html) and request that the authority file be added. The more these are requested by our members, the higher the priority assigned to that request.
Adam Schiff put in a request for LCDGT and LCMPT in the Cataloging Community Center during this webinar: https://www.oclc.org/community/discussions/cataloging-metadata.topic.html/provide_access_inconnexionandrecordmanagerto-8yLt.en.html.
The Canadiana database comprises two different files: the Canadiana Subject Headings, which are in English, and the Canadiana Name Authority File, which is in French. While the Canadiana Subject Headings may be used on any record, no matter the language of cataloging, only records cataloged using French language of cataloging may apply and control headings from the Canadiana Name Authority File.
OCLC provides access to MeSH using Record Manager but currently there are no plans to provide access to MeSH using Connexion.
This is something Metadata Quality would like to do but currently is further down the list of things to implement. One of the things that happens as part of controlling is that headings are maintained when the authority file heading changes. When that happens in FAST right now, we have to go through the effort and change all of the bibliographic records in a different way than is easily done with controlling headings.
This is another example of an authority file to request from the Cataloging Community Center (https://www.oclc.org/community/cataloging-metadata.en.html).
We have considered that but did not implement it. Subfield $0 has been around longer than subfield $1, and in terms of linked data, subfield $1 is pretty important. Subfield $0 can contain a URI or an authority record control number, so we were already on the path of using subfield $0, even though we didn't display the authority record control number in the case of LC controlled headings. We have looked at the possibility of outputting either subfield $0 or subfield $1 through Collection Manager, although we haven't quite gotten there, so that you can get what you need for a local catalog while we at the same time maintain our already-built internal mechanism that makes use of an authority record control number in subfield $0. That could then end up being the basis of what we output.
This is on our wish list. We do not have a date for this but Metadata Quality has been asking for this enhancement.
Regardless of the interface that you use to access WorldCat, you will see updates. As the changes are made, you will see them applied to WorldCat as a whole. So, whether you are working in Connexion or Record Manager, you will see those updates. If you get updates through Collection Manager, you will see the updates there as well.
Yes, if the heading in the bibliographic record matches a reference in an authority file record, the control heading service will flip the heading to match the form found in field 150 of the authority file. It is our hope that as we go through the records in WorldCat, retrospectively, that we will pick up headings that weren’t controlled for some reason in the past and get them controlled to the authorized form.
At the moment the only way to insert a FAST heading into Connexion or Record Manager is to copy and paste the heading into the bibliographic record. You can also look into using assignFAST (http://experimental.worldcat.org/fast/assignfast/) which may be helpful when adding headings.
If you can reproduce it regularly and it seems to be a system problem, you can report it to askqc@oclc.org and we can look into the situation and, if needed, forward the problem on to the correct group of people to make sure it is not a functionality issue on our end.
We currently do not have this capability. You are welcome to add this request to the enhancements in the Cataloging Community Center (https://www.oclc.org/community/cataloging-metadata.en.html).
All enhancement requests may be submitted using the general Cataloging Community Center (https://www.oclc.org/community/cataloging-metadata.en.html). You also have the ability to upvote an enhancement request that has already been submitted by another member library.
The best way to see the set of enhancements that have been requested is to utilize the Cataloging Community Center site (https://www.oclc.org/community/cataloging-metadata.en.html).
This would be the case depending on your setting for receiving record updates. That is one of the reasons we are giving plenty of notice before starting this project, so that libraries have time to make any needed changes to their settings and the effects on their workflow are minimized. We are planning on starting at the very highest OCN and working our way down to the lowest, and we expect this to take about 3 months to complete.
Not at this time. To do this, we would need a request for this in the Cataloging Community Center (https://www.oclc.org/community/cataloging-metadata.en.html).
This would be difficult to do as we protect the controlled headings from tag changes because of the various rules it would take to make sure the tag is correct. So, while it’s possible, it is not likely at this time.
Anyone with a full cataloging subscription has access to both Connexion and Record Manager, so which one you use depends on your institution's policy. They each have different functionality. Connexion allows more bulk editing and customization, while Record Manager has other functionality like enrichment and controlling to more authority files. Note that Connexion client is only available for PC users and requires that you download the software onto your computer, while Record Manager is a web-based tool with no software to download.
We expect this macro to be fixed with Connexion 3.0.
This is a good question to send to OCLC Support. They can help you troubleshoot with Edge.
We don’t know for sure as everyone’s set up is different. We encourage you to attend the Cataloging Community Session on June 18, 2021 for an update on Connexion 3.0.
This bug was fixed earlier this week so should not be a problem anymore.
The Library of Congress (LC) distributes most of the records they catalog and we receive weekly distributions from them; however, they do not distribute everything in their catalogs. If you are finding a record in LC's catalog, that does not necessarily mean we have received it, so if you are wanting to know about a specific record, please send a question to bibchange@oclc.org.
We are hoping to have the documentation finished and posted later this summer to the OCLC Community Center and the OCLC Help website.
The language of cataloging is taken into consideration in DataSync processing and in deduping WorldCat. For example, we want to match an English-language cataloged record to an English-language cataloged record and a German-language cataloged record to a German-language cataloged record, but not match an English-cataloged record to a German-cataloged record. Beyond that, there really is no difference in the processing; we compare the various elements of the records in the same way, regardless of the language of cataloging. Taking field 300 as an example, we're normally looking at things like the coding of the record to determine format and then the extent in terms of the numbers that are present, rather than actually comparing the terms, as different terms are used in different languages to indicate the same thing.
Right now, we are in the process of doing some testing to incorporate number matching back into DataSync matching. We do not have a definite date or timeline when that will be installed, but it may be as soon as in the next month or so.
As records are merged the OCLC record numbers are added to the 019 field in numerical order, regardless of when a particular merge took place in relation to any other OCLC record numbers in the 019.
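The ordering rule described above amounts to a simple numeric sort. A minimal sketch in Python (the record numbers are made up for illustration):

```python
def merge_ocns(retained_019, merged_ocns):
    """Sketch of the behavior described above: OCLC record numbers from
    merged records are kept in field 019 in numerical order, regardless
    of the order in which the individual merges happened."""
    return sorted(set(retained_019) | set(merged_ocns))

# A later merge of a lower OCN still sorts ahead of an earlier one.
result = merge_ocns([98765432], [12345678])
```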
The documentation will cover DataSync as well as many other processes. For example, Collection Manager and Duplicate Detection and Resolution (DDR) are two of the wide array of other processes that will be covered in the new documentation.
This comparison between manual and automated processes to add new records is not something that is currently tracked; however, the vast majority of new records added are contributed by automated batch processes.
The process looks at the 040 $b for the language of cataloging code only in determining the language of cataloging of a record. If a record were a hybrid where the 040 $b code did not match the language of cataloging in the description of the record, then there would be the possibility that the record could be merged into a record with the same language of cataloging code and fields in the different language could end up getting transferred.
With the updates coming to the DataSync matching that was mentioned earlier, the match rates for records coming in via DataSync should improve. This is not necessarily to make DataSync and DDR matching algorithms more uniform, but it will improve the match rates in DataSync processing.
Not necessarily, if no information was transferred during a merge, then the symbols present in the 040 will not transfer to the retained record's 040, only the OCLC record number will be added to the 019 field. If data does transfer during the merge, then the symbols will be added to the 040 unless already present.
If you are not a PCC library, so that you're not able to change those headings to their correct form on your own, I would say go ahead and report them, and also note that there are others in the database. We have been doing a little bit of work with juvenile subject headings of late and noticing cases where headings include the subdivisions Juvenile fiction or Juvenile literature when they should not have those subdivisions, as well as related LCSH headings that need to be adjusted, so go ahead and report those errors.
No, we don't, because the MARC format doesn't carry that kind of information, and we do not have it internally either. The only record of a change to an existing bibliographic record would be the addition of a symbol to the 040 field. But once you have a lengthy 040 field with many $d's, and a lot of fields that have been added or changed in the record, there's no way to match up who did what. We do have a history of the record, so we can look back at certain points in time and compare before and after images to tell what happened, but not in the sense that the question was asked, in terms of "can I tell by looking at a field who added it," for example.
We do have an internal history tool that will allow us to look at records prior to a change transaction as long as it happened after April 2012.
Unfortunately, I don't think that's covered by the data prep processing that I explained. We do a lot of clean-up, but I don't think that's something our processing can handle. Definitely bring these to our attention so we can work with the database specialists for that particular project. We can communicate back to the institution so they can fix future records, and there is a possibility fields could be excluded from future processing if it's something they are not able to fix.
I can say that it does figure into some decisions in part. The record retention hierarchy that we have in place starts off by looking at things like codes in field 042 and the source of the record, such as the Library of Congress versus an OCLC member. But once two records are in the same rank, we consider the number of fields present and the number of holdings in combination, so a record that appears to be more complete because it has more access points in field 700 could possibly win out in that comparison.
All of it.
Yes, that is something that we can correct for you. If you send a message to bibchange@oclc.org, with the affected record numbers if you have them, we can make the change for you if you would prefer that a particular symbol be associated with a record. That's something that we have to do; it's not something that our users are able to change in a record.
No, we are not including that; however, you should be able to find information on the codes we use and the purpose of those codes in Bibliographic Formats and Standards (BFAS) at https://www.oclc.org/bibformats/en/quality.html. This might be good information to include, so I'll bring that back to the team that's working on the documentation.
If you could send a message to bibchange@oclc.org with some examples, we're happy to look into it and see if those fields are appropriate for the record or not. Then if they're not, of course, we will be happy to remove them.
We have what we refer to as sparse record checks, so we do have criteria that records need to meet in order to be added to WorldCat. So, the records that you refer to are obviously meeting those criteria. I don't know if you have any specific examples that you're asking about; again, we're happy to take a look to see if there are some issues, but we do try to set criteria to make sure that we're getting records that have enough metadata so they can be searched, retrieved, and discoverable.
You should be able to see the edits you make using Connexion client in the WorldCat.org display of the record after replacing the record then refreshing the webpage. Additionally, there can be information from external sources displayed in the WorldCat.org record display that does not exist in the WorldCat bibliographic record.
We don't have an exact timeframe yet. The testing is happening right now with the field test. I would predict in the next few months, but I don't think we have a specific timeframe yet.
That's actually one of those things that's on our list to clean up periodically. So, it looks like we need to put that on our list to clean up sooner rather than later. We do try to take care of those and either merge them into duplicate records or delete them.
If our Searching WorldCat Indexes document is correct and if I'm reading it correctly, the 655 is indexed in the subject [su] index and in the keyword [kw] index, in addition to the specific indexes that take into consideration the 2nd indicator. So, a 655_4 is indexed and should be searchable in both the subject and keyword indexes.
We have several different flows that occur with our DDR duplicate detection program so, how a record is added to WorldCat affects when it will go through DDR processing. For example, records added via DataSync generally get into the DDR queue within several days and processed within 48 hours. For a record added or modified online, it takes 7 days to enter the DDR queue.
The message of the day was decommissioned in March so a message of the day is no longer being posted. To elaborate, this is related to the new client version, which is in field-testing right now, where there will no longer be a message of the day.
Except for the opening screen, which looks a little bit different, it looks very much the same as it always has. The changes are largely, although not exclusively, under the hood, if you will, behind the scenes. The idea was to upgrade the client to be better compatible with more modern and up-to-date Windows software and so on. So, it won't look a lot different, but in theory, it will be behaving better. Additionally, some things such as GLIMR clustering and institution records that are talked about in various places in the Connexion client, those references have been removed. If I remember correctly, the help system has been moved to the OCLC website. When you access help, you'll be going to the OCLC website rather than within Connexion itself.
Record Manager enhancement requests can be made through the Cataloging & Metadata Community Center, or send a message to askqc@oclc.org.
Data from the LC/NACO authority file is sent each week and the VIAF database is updated weekly as well. Depending if a change or addition was made early in the week, you might see it in VIAF the same week, but if you did it later in the week the change might not be reflected in the VIAF database until the following week.
The input standards of mandatory and optional and so on were originally set long ago by an advisory group of catalogers from member libraries and were partially based on the Library of Congress national-level bibliographic record. Fields such as the 505 contents note, of course, are appropriate in many cases and are extremely useful as you point out, such as for sound recording song albums, collections of stories, anthologies, and so on. But the 505 is also not found in many things, such as a fictional novel, for example. Input standards aren't meant to indicate if a field is worthwhile or not worthwhile; they are meant to be floors, not ceilings, so if any optional field is appropriate and useful in a particular instance, then yes, include it.
We should have some info on this soon — stay tuned!
GLIMR was an experiment that OCLC undertook some years ago to cluster bibliographic records together. It evolved into the clustering you now see in WorldCat.org.
Some of us did do some testing prior to the field test and all of our personal macros that we use were made available. I don't use text strings, but I believe both the text strings and macros will transfer over to the new version.
Sometimes it gets it right, sometimes it doesn't. VIAF clustering is determined by algorithms, which are not static, they are always being worked on and refined. So, if you see any VIAF clusters that are incorrect, please report them to bibchange@oclc.org. We manually edit the clusters as needed and can work with the VIAF team to see if there's a bigger underlying issue when indicated.
It will be a completely new download.
We get many requests for editing the VIAF clusters, so we apologize for not getting those completed as quickly as we would want to. Definitely keep reporting them, we try to work on them as much as we can.
We're just working on the conversion of encoding level K records, that's the first encoding level we decided to try to take care of. I just checked today; we still have about 16 million records to go. We have a little while yet to work on those. I think we were talking about moving on to encoding level M, but that's a huge number and we might break that down somehow or another, but I think that may be next.
At this point in time, I do not believe there's any discussion on that being developed for VIAF.
We actually are still having discussions on how we're going to evaluate incoming records and how to code them. We haven't decided yet so unfortunately, I have nothing more concrete to share.
Yes, if they pay for a cataloging subscription.
VIAF stands for Virtual International Authority File.
Institutions do not need to worry about precomposed characters in bibliographic records, as OCLC uses UTF-8. For authority records, institutions should make sure that characters are decomposed, since precomposed characters cause authority records to get stuck in distribution. Avoid copying and pasting such characters from the web; instead, use OCLC's diacritics and special character set when entering diacritics in authority records.
Also, BabelStone's "What Unicode character is this?" is an online tool that can help identify whether a character is precomposed or decomposed. Copy and paste a character into the tool to see the Unicode code points that are being used.
And a good tool for diacritics is Joel Hahn's Macros for the Connexion Client: CvtDiacritics.
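As a rough illustration of what these tools check, Python's standard unicodedata module can both name the code points in a character and convert precomposed characters to decomposed (NFD) form. This is a sketch for understanding the distinction, not an OCLC tool:

```python
import unicodedata

def names(text):
    """Name each code point, showing whether a character is precomposed
    (one code point) or decomposed (base letter + combining mark)."""
    return [unicodedata.name(ch) for ch in text]

def decompose(text):
    """Convert precomposed characters to decomposed (NFD) form,
    the form required in LC/NACO authority records."""
    return unicodedata.normalize("NFD", text)

precomposed = "\u00e9"               # é as a single code point
decomposed = decompose(precomposed)  # "e" followed by U+0301
```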
No, they are not. For the authority records, there's no validation, but it also doesn't automatically change them.
Normalization does take place for searching functionality and that extends to precomposed versus decomposed characters.
Correspondence is one of several special subdivisions that have multiple possible subfield codes: you can have Correspondence as a subfield $t or as a subfield $v. With the goal of trying to be helpful, the controlling software proposes both options and lets the user choose which one is correct, rather than automatically controlling to the subfield code that was entered into the record.
The FAST headings will change when the LC term changes. We did consider diverging from LC for certain terms, but it complicates processing to such an extent that we decided not to do that for the time being, with the expectation that many of the offensive and outdated terms will change in the next few years.
The only way I know of to ensure these are paired correctly is to look at them manually and pair them manually. Our system does try to pair things up, but it doesn't always get it quite correct. You can edit that: if you're working in Connexion, you can pair and re-pair fields as needed to make sure that the pairings are correct. Not all fields need to be paired; if there is a vernacular field without a transliterated field, then you wouldn't pair that with anything.
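In MARC, this pairing is carried by the occurrence number in subfield $6 (for example, a 245 carrying $6 880-01 pairs with the 880 carrying $6 245-01). A minimal sketch of that matching, using simplified (tag, $6) tuples for illustration:

```python
def pair_880s(fields):
    """Sketch: group regular fields with their 880 vernacular
    counterparts by the occurrence number in subfield $6.
    Each field is a (tag, subfield_6) tuple; subfield_6 is None
    for an unpaired vernacular field."""
    groups = {}
    for tag, sub6 in fields:
        if not sub6:
            continue  # vernacular field with no $6: nothing to pair
        # "$6 880-01" / "$6 245-01" -> occurrence number "01"
        occurrence = sub6.split("-")[1][:2]
        groups.setdefault(occurrence, []).append(tag)
    # keep only complete pairs (one regular field + one 880)
    return {occ: tags for occ, tags in groups.items() if len(tags) == 2}

fields = [("245", "880-01"), ("880", "245-01"), ("250", "880-02")]
paired = pair_880s(fields)  # the lone "02" occurrence is incomplete
```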
If "validated" means controllable, then there's a list available:
https://help-de.oclc.org/Metadata_Services/Authority_records/Authorities_Format_and_indexes/Get_started/40Available_authority_files
It's preferred to just go ahead and delete them all; that way, they can be regenerated in about a month or so. It's certainly not required that you delete them all, but it is preferred.
If you're not comfortable with using them, you can remove them locally from your system, so they're not showing up in your local copy of the record.
If programming your local system to not display the FAST headings while leaving them in the records is an option with your own ILS, that's certainly a good option to consider for FAST or any sort of fields that you don't want to display to the public. If you delete them and then decide 10 years down the road that you really wanted them, you won't have them; but if you keep them and just suppress the display, you will have them for any future uses you want to make of them.
Lots of people have considered it, but it would be up to the Library of Congress to make the decision to modify their authority records to allow that.
We do not have specific guidelines in place; both are allowed in bibliographic records. We prefer decomposed characters because they work better with the Connexion macros. Also, if you picked something up from a bibliographic record and wanted to create an authority record, since the authority record can't accept precomposed characters, it's better for the bibliographic record to be decomposed. So, yes, it's better to add decomposed characters and diacritics using the tools that are available in the cataloging interface, such as the OCLC tool for entering diacritics.
No, they will not. There's no validation check in either Connexion or Record Manager that will cause a validation error when a record has precomposed characters. The best option would be to utilize the Connexion and Record Manager functionality to enter those diacritics: in Record Manager, there is a function to insert a diacritic, and in Connexion, you can enter diacritics under the Edit menu.
Other tools:
From the quick look I took just now, Islam--Relations--Hinduism and Hinduism--Relations--Islam appear to be valid heading strings. Since there's no separately established subdivision Hinduism or Islam that would be controllable as subfield $x, those subfields are uncontrolled. So, only Islam--Relations--Fiction and Hinduism--Relations--Fiction are controllable among those headings. Consequently, the $x subfields remain uncontrolled while the rest of the subfields are controlled.
Currently, our turnaround time is within a couple of days. Incorrect merges do take a higher priority, but keep in mind that, depending on the complexity of the merge it could take a little while to get them teased apart, but usually within a couple of days.
It is optional. There is no requirement for you to enter transliterations, or to include the non-Latin characters. But if you are cataloguing according to certain standards you may want to follow whatever it says in those particular standards.
I know there is some thinking going on about what will happen with BIBFRAME, and the idea has been floated that fewer fields would routinely be transliterated within BIBFRAME. "Fields" might even be the wrong term in BIBFRAME; it would be elements within the BIBFRAME structure.
There are a lot of libraries thinking about this, and what they want to do going forward. The reason of course, that transliterations were entered in such a large scale initially, when people were doing online cataloging in MARC is because the vast majority of local systems did not support non-Latin characters. The only way to enter data into your local systems, or to WorldCat was using Latin characters and transliteration. But that's changed, since OCLC supports all of Unicode now. There are a lot of options.
It does not take into consideration the detailed date, and we would recommend that if there's a way to differentiate different versions of a similar document that you supply an edition statement if there isn't one that you can use. So, what you're doing is a good practice.
The next version of Connexion Client is in Field Test right now and these are some of the issues that they are working on getting resolved. So, hopefully, that will come out later on this year and it will solve most of these problems if not all of them.
I assume that they would be for bibliographic records; that's what OCLC has control over. If you have concerns about the use of precomposed versus decomposed diacritics in the LC/NACO Authority File, LC is the place to talk to about that, because it's their system that requires the use of decomposed diacritics.
They cannot be searched directly in Connexion but PCC maintains lists of participants on their NACO website: https://www.loc.gov/marc/organizations/
Also, you can search for a MARC Organization Code in the MARC Code List for Organizations.
Then there is help here on searching OCLC Cataloging Products: https://help-de.oclc.org/Metadata_Services/Authority_records/Authorities_Format_and_indexes/Indexes_and_indexed_Fields/0Indexes_and_indexed_Fields_A_to_Z
This question is better sent to OCLC Support. You can also check out the Collection Manager/Knowledge Base Virtual Office Hours and ask them there.
This is probably about the Data Sync matching, and yes, there are other libraries, like the National Library of Medicine and the Library of Congress, that would fall into the national library category, or the Program for Cooperative Cataloging projects, so yes, there are others than just the obvious national libraries that come to mind.
We're actually running DDR continuously, pretty much around the clock, and we have several different queues that records get into DDR. This has come up in previous sessions, but for example, if significant changes are made to a record by a cataloger, those records would get into the DDR queue in a week. New records that come in through Data Sync generally get into the queue within several days. But we do run DDR 24 hours a day.
I'm assuming this is about subject headings where you have a heading "Cats" with a subfield $v "Juvenile materials." In that particular scheme, if that second indicator 1 value is not present in the retained record, then when the records are merged, those specific subject headings would transfer to the retained record.
Anytime you suspect that records have been incorrectly merged, you are welcome to submit those to bibchange@oclc.org and we would be happy to look into that to see if maybe those ISBNs did transfer during a possible incorrect merge and then undo it if appropriate.
When you see "Match," that means that was the only action taken on the record. "Create" obviously would be a new record. "Field transfer" would be when data did transfer to the WorldCat record. We do have some documentation that we've been working on that will explain a little bit more in detail about how matching works, along with many other things such as field transfer. There is some information on Data Sync matching already available that may help answer the question (oc.lc/data-sync-processing). There are conditions that go into matching. We obviously do a lot of comparison points from the incoming record to the record in WorldCat. It's all based on the field transfer rules that are outlined in Chapter 5.
If we're trying to match a record that is coming in from Data Sync, and the database record has errors, that can be enough to make them not match--if they're even considered in the first place. We retrieve records and then compare the fields. If it's a case where one record is missing the publisher in 264 $b and the other record has one, that may not prevent a match completely, but it's taken into consideration. We have situations like that, comparing a published version of an item versus an unpublished version. The quality of the records that are already in the database can affect the matching of incoming records.
We're still working on making sure everything will work correctly when we run that controlling and enrichment retrospective. We're working on final details before we test the first set of records. It's not clear when the full runs will start, but hopefully soon.
Yes, that is correct. If there are also LC Subject Headings or subject headings from another scheme already present in the retained record, those from the deleted record will not transfer.
These are the same rules that apply when records are manually merged.
Basically yes. Even though there are a lot of limits on fields that will transfer to a CONSER record, a non-CONSER library couldn't make a change manually online that would cause their symbol to show up in the 040. So when you see a $d with a non-CONSER symbol in a CONSER record, after the point the record had become authenticated in WorldCat, the only way it should end up there would be through a field transfer situation.
We don't actually validate fields at the point where they transfer; that would be an additional step, and it's just never happened that way. It is unfortunate to take a retained record that passes validation and then add a field that should improve the record but causes it to fail validation because something was incorrectly coded. For the kinds of fields that we routinely transfer--call numbers, subject headings, contents notes, that kind of thing--we usually try to clean them up as much as possible, but obviously we don't catch everything. We let a lot of records into WorldCat based on the value of being able to share libraries' holdings, and look the other way on some validation errors, but we try to go after them as much as possible. Maybe we should further discuss validating fields at the point of transfer.
If you come across that, you're also welcome to send those to bibchange@oclc.org, to see if we're able to figure out what's wrong with them or figure out a way to fix them.
Please report any such incorrect merges (bibchange@oclc.org) so that we may unmerge them if necessary. It could be that there was incorrect coding in either of the records that would cause them to be merged. We do come across a lot of audiobooks that are on Books format rather than Sound Recordings, where they should be. That is a problem and we do fix a lot of those, but it is possible things get by us in those cases. It's normally miscoding that causes that to happen because our merge software is looking at the distinctions, not just "oh, they have the same title and publisher and date in common." It's really looking at the format to say "this is an audiobook, this is printed text," or "here is the online version," to keep all of those things separate. So when they do get merged, it's got to be the result of some incorrect coding.
No, I don't believe that we go into that kind of detail in that documentation--in the link that I sent--but I would say the reasons that something wouldn't transfer would be similar to the overarching reasons I outlined for why we don't transfer some fields. Any specific question about why something does or doesn't transfer can be sent to AskQC@oclc.org.
Authority questions can be sent to authfile@oclc.org. And if there's a question about whether or not they're duplicates, or if there might be something wrong with the record, you can go ahead and send those to that address. If it's a question that would better be answered by AskQC, it will be forwarded from Authfile to AskQC.
Neither 041 nor 546 fields, both having to do with language, play a direct part in matching or in record resolution. In most cases, titles differ from language to language, so you won't necessarily need a language edition statement.
But in cases where the title of a resource may be the same from language to language, that is, for instance, if the title happens to be a person’s proper name, a biography or something, where that wouldn’t change from language to language, it may be useful to include the language edition statement to help differentiate from one language of the resource to another language of the resource.
If other bibliographic elements, such as the publisher or the pagination, are different, it's probably less important to add a language edition statement. But if the title is the same from language to language, it may be useful.
DtSt t is used when you have both a date of publication and a date of copyright. It gets used a lot nowadays because, even when the copyright date and the publication date are the same, they are different RDA elements and are coded separately. So you could in some cases have either an actual or an inferred publication date in Date 1, and the same date, representing the date of copyright, in Date 2.
That wouldn't have happened under earlier instructions, but it does happen quite often under RDA. A publication date and a copyright date could be either the same, in which case you would use DtSt t, or different, in which case you would use DtSt t as well. And just remember, under RDA a copyright date cannot be used by itself as a publication date; it can be used only as the basis for an inferred publication date, if you have no other evidence of publication.
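The coding described above can be sketched as a tiny decision function. This is a simplification for illustration only, not cataloging software, and it covers just the two cases discussed (publication date plus copyright date, versus a single publication date):

```python
def dtst_for_rda(pub_year, copyright_year=None):
    """Sketch of the DtSt choice described above: 't' when both a
    publication date and a copyright date are recorded (even if they
    are the same year, since they are separate RDA elements), and 's'
    for a single date alone. Returns (DtSt, Date 1, Date 2)."""
    if copyright_year is not None:
        return ("t", pub_year, copyright_year)
    return ("s", pub_year, None)
```

For example, a 2020 publication with a 2020 copyright date is still coded t, with the same year in both Date 1 and Date 2.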
Yes. You may use a range of dates to search those indexes.
It is kind of general, and isn’t necessarily helpful unless it is something that the publisher emphasizes that this is meant for adults as opposed to children. For instance, if there’s an adult edition and a child’s edition, or something like that, you may want to include the target audience as adult for the adult version. But in many cases, because target audience is optional, you don’t need to include that it’s for adults.
This coding is confusing, because it has changed over the years. The current practice is to not code for government publication if the publisher is a state university press, and this is stated in Bibliographic Formats and Standards, in the fixed field government publication page, under academic publications, where we say, “Publications of state college or state university presses in the United States (e.g., Kent State University Press, Michigan State College Press) are not considered to be government publications.”
There is actually an exception to that. The exception would be things like college catalogs or directories, or things like that. So those are actual publications of the state.
Yes.
You are welcome to code those elements. We did not cover those elements today—we will be covering them in the 2022 presentation on the fixed field elements that are not used in all the formats. But certainly, you are most welcome to code them, and if you see records in WorldCat that don’t have these codes and ought to, please go ahead and add them to the records and replace them.
Yes. This would be the same thing as adding an encoding level K record in the past—encoding level 7 means minimal. Now, if you have a really incomplete record, with only title, author, and imprint and no paging, you would have what we call an abbreviated record, and that would be encoding level 3, and you are welcome to add those as well.
If you send a file of records to our data sync with the date entered being a future date, data sync will reject those as being invalid. Any past dates are accepted, but if you sent something that was coded, for example for 2023, in the data, our system would reject it.
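As a sketch of that kind of validation: 008/00-05 is Date entered on file, in yymmdd form. The two-digit-year pivot below (70) is an assumption for illustration only; it is not how OCLC’s data sync actually interprets years.

```python
from datetime import date

def entered_date(field_008):
    """Parse 008/00-05 (Date entered on file, 'yymmdd') into a date.
    Assumed pivot: yy >= 70 -> 19yy, otherwise 20yy."""
    yy, mm, dd = int(field_008[0:2]), int(field_008[2:4]), int(field_008[4:6])
    year = 1900 + yy if yy >= 70 else 2000 + yy
    return date(year, mm, dd)

def is_future_dated(field_008, today):
    """True if the date entered is later than 'today' -- the kind of
    record a batch load would reject as invalid."""
    return entered_date(field_008) > today

# A record whose date entered is coded 2023, sent in during 2021:
print(is_future_dated("230115", date(2021, 11, 16)))  # True
```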
Nowadays, under RDA, you would use s as a DtSt code much less frequently than you used to. Unpublished items that have a single date of execution would have DtSt s. If you’re cataloging something older that has no copyright date, for instance, or for which you have to infer a date of publication because it isn’t stated outright, it’s possible you could use DtSt code s. But it’s much less frequently used now than it was under AACR2.
This is referring, I believe, to the alphabetic codes, I and K in particular, the OCLC-defined codes. I is equivalent to blank, and K is equivalent to 7 in the official MARC code list.
We don’t have a projected date as to when they will become invalid. We are currently working on converting the level K records to the official MARC codes in WorldCat. We have a long way to go, though. We’re going to have to convert all the Is, and then we’re going to have to convert all the Ms, so we aren’t sure yet.
We will give plenty of notice before we do this. Meanwhile, the official MARC codes are available now, for everyone to use, when they are entering records, so if you want to switch to using blank or 7, or whatever your coding requires for the record you’re entering, please feel free to go ahead and do that now, to get used to using those codes.
No. If this is one single non-Latin language, that’s the language code that ought to go in the fixed field. If it is multiple languages, you would put mul—if it’s a collection, you might have multiple languages—and then you code the languages in the 041.
The code zxx, for no linguistic content, is used, in the case of sound recordings, when the music is entirely instrumental and has no words associated with it. If there are accompanying materials that need to be coded—program notes, things like that—those would be coded in field 041. But when the main content has no language content, such as purely instrumental music, that’s when you use zxx.
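The decision described across these answers can be sketched as a tiny helper. This is an illustrative simplification, not official guidance: it ignores cases where a predominant language might be coded instead of mul, and accompanying-material languages would still be enumerated in field 041 rather than here.

```python
def language_fixed_field(content_languages):
    """Choose an 008/35-37 code from the languages of the main
    content (MARC language codes such as 'eng', 'fre'). Illustrative only."""
    if not content_languages:
        return "zxx"  # no linguistic content, e.g. purely instrumental music
    if len(content_languages) == 1:
        return content_languages[0]
    return "mul"      # multiple languages; enumerate them in field 041

print(language_fixed_field([]))                     # 'zxx'
print(language_fixed_field(["ara"]))                # 'ara'
print(language_fixed_field(["eng", "ger", "ita"]))  # 'mul'
```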
RDA allows you to do it either way. You can spell out copyright if that’s how it’s presented, or you can use the copyright symbol, the C surrounded by a circle. Either way is fine as far as RDA is concerned.
Yes, that is correct. You may change the type code. If for some reason the system prevents you from changing the type code when it’s incorrect, feel free to send a message to bibchange@oclc.org, or use the request correction function in Record Manager, or report error in Connexion, and this will go into that bibchange inbox, and it will be dealt with as swiftly as we possibly can.
Now, there are instances where a cataloger may legitimately decide that one type is more important than the other, for example, a printed book that’s accompanied by an audio recording of someone reading the book—that could also be considered an audio recording of someone reading the book, accompanied by the printed book. And so there may be legitimate differences in how people decide to code something, in which case there may be two records in WorldCat. But if indeed there is a mistake, for example, if a map is coded as a book, feel free to change it and/or report it to us so we can change it.
You could code one in the projection fixed field, and then also make a note in the record that there are two different projections.
Editing the type and bib level fixed fields depends on the role or authorization that you have. If you have the full-level authorization role, you can always edit your local copy, and in WorldCat bibliographic records coded as less than full, you’re able to change those fields. If it’s a record that your library input, you’re also able to change them. But in authenticated records, you would not be able to change those elements; you can submit those to bibchange@oclc.org, and the bibchange staff would be able to help.
It really is about the fullness. Depending on the encoding level, if it is a full-level record, then you probably won’t be able to change it, but if it’s less than full, then you can. And in regards to if you created the record, your library, and you’re the only one that has holdings to it, you can also change the type code.
Not necessarily. Nowadays, under RDA, it’s quite common for the date of publication and the copyright date to be the same, so that DtSt is coded t and both date 1 and date 2 are the same, or they look the same. That’s because under RDA, the date of publication and the copyright date are entirely different elements. That was not the case in AACR2 and other earlier descriptive instructions. So if you have a date of publication, even if you are deriving or inferring that date of publication from a copyright date, and the date of publication and the copyright date are the same, you use DtSt t and the same date in both date 1 and date 2.
I’m not sure exactly what you mean by a previous date existing in a note. In most cases, if not all cases, even under RDA, it will be mainly the most recent copyright date that will be considered to be the date of copyright, so I’m not sure exactly what you mean by a previous date existing in a note, but having DtSt t and the same date in both date 1 and date 2 is quite common these days under RDA.
A reproduction date, or the fact that something had been previously published, may or may not factor into the date of publication and the DtSt coding. You’d have to check the hierarchy of DtSt codes. I don’t remember offhand if t is higher than either of the reprint or other DtSt reproduction codes. I think it is higher, so DtSt code t would take precedence.
No. I know that’s a pretty brief answer, but within WorldCat, you are always looking at the current version of the record when you’re in a cataloging interface. We do have a file that is internal to OCLC called Journal History, so if there is a reason for us to look up a previous version of a record, we can do that here, and that helps us know what happened in the past for a record, but that is only available internally at OCLC right now.
My interpretation of those codes is that the values for romanized and such applied only to transcription from cards. That’s certainly my reading of it. But no, you do not need to code that if you are not transcribing from a card.
That one’s a mystery to me! Perhaps if they could provide an example, you could send that to bibchange, and we could take a look to see what’s happening behind the scenes.
Yes. That would be great. That is definitely what we would like you to do. It is not required at this point—you can still use code I, but we would prefer that you use the blank, because at some point in the future, we will be changing those I-level records into blank, and converting all the records in the database that have level I to the correct, official MARC code.
I would say it is probably the code in the 008 field that controls your local ILS display, but you would have to verify that with your local ILS documentation, because we don’t really have control over that.
I would also say that the code in the 040 subfield b and the code in the 008 language fixed field have totally different meanings. The language in the 008 fixed field is the language of the material you are cataloging. The language in the 040 subfield b is the language of cataloging. In other words, are you cataloging in English, or are you cataloging in French? What language are your notes in? If you’re cataloging in French, your notes will be in French, even if the material is in English, which would then be the code in the 008 field.
Now, the 546 field is about the language of the resource, rather than the language of the cataloging, and that’s usually used when there’s something complicated that you want to explain, or some sort of translation involved. Usually, you don’t enter a 546 field, if it’s very clearly just one language for the resource.
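As a hypothetical fragment illustrating the distinction (the institution symbol XXX and the field contents are placeholders, not a real record): a record cataloged in French for a resource whose text is in English might look like this.

```
008/35-37  eng                        Language of the resource: English
040     $a XXX $b fre $c XXX          Language of cataloging: French
546     $a Texte en anglais.          Note about the resource's language, written in French
```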
Yes. When you have DtSt coded t, you usually see the 264 second indicator 1 for publication information, and 264 second indicator 4 for copyright. You usually do see that.
It’s really hard to generalize, because there is that hierarchy of DtSt codes, both in MARC 21 and in Bibliographic Formats and Standards, governing which DtSt code you use. So in the case of reprints (certain kinds of reprints, anyway, as explained in much more detail under DtSt in BFAS), the date of the reprint that you have in hand and the date of the original, if it’s presented as such, may mean that DtSt code r takes precedence over DtSt code t. So just because there are two 264 fields, one with a date of publication and one with a date of copyright, that doesn’t necessarily mean you will be coding DtSt t.
When we say something is used for matching, there are two times when matching comes into play. This was machine matching we’re talking about. And that is when libraries load records into OCLC by data sync or some other batch process, the machine is using certain elements for matching. And then also, when we are trying to merge duplicate records by DDR, the duplicate detection and resolution software, that’s a machine, or software program, using certain elements for matching those records. So that’s what we mean by used for matching. It’s a machine process.
It wasn’t a change of policy per se. It was the evolution from AACR2 to RDA. In RDA, different kinds of information, even though they may be related, or may even be the same in some cases, different kinds of information are most often separate elements. So unlike under AACR2, under RDA, the date of publication is one kind of date element, and a copyright date is another kind of date element, and so those are considered to be related but separate elements, or pieces of data, and so that is why you’re seeing DtSt coded t and the same date in date 1 and date 2 in so many cases now, under RDA.
That’s a really good question! When it was coded in the past, for transcription from cards, it was important, and so those records that were converted from cards still have those coded elements, and that may be important at some point. So I’m not sure we want to make the element obsolete, because of that. But it is just not something that’s used currently.
If someone does want to make an element like this obsolete, they would need to propose it to the MARC Advisory Committee. That’s not something OCLC can determine.
Yes. If you catalog in English your notes ought to be in English, except for quoted notes. If you are quoting something from the resource, and the resource is not in English, then you may have a note that is in quotes, that is not in English. So if you are seeing things that are coded incorrectly with language of cataloging, and you’re seeing a pattern of those, with a particular library, feel free to report those to bibchange, and we’ll see what we can do about making corrections. Or if it’s just an individual record, you can probably change it yourself.
The best possible cataloging would be yes, it would be good for you to translate it, but I too have seen many records where you’re using a publisher-provided abstract in the 520 note, the summary note, and it’s not translated, it’s in the language of the item, but everything else is in the language of cataloging. If that’s all you’ve got, I say it’s acceptable. It’s not the best, but it’s acceptable.
It already has been. You can download it now.
This is not one of the resources listed in the back of the book. It might have been mentioned within one of the chapters, but if not, there's always a place for it in CCK7 (Cataloging Correctly for Kids, edition 7). I am marking it now so that the next authors have a chance to include it.
H1095 was updated in August 2021, while the other Subject Headings Manual documents were updated around 2013. I would guess that H1095 is the way that the Library of Congress wants to go, and they just haven't finished updating all of the documentation that needs to be updated. "Juvenile films" now would be for nonfiction and "Juvenile drama" would be for fiction films.
It doesn't look like there is a way to do so within Library of Congress Subject Headings (LCSH) or Sears. It would have to be through some other thesaurus, or your own made-up thesaurus with second indicator 4, in a subject heading or a genre heading. There are "Youth films" and "Youth television programs"; other than that, I'm not aware of anything in LCSH that allows you to use a subdivision "Young adult fiction" or the like.
A lot of copy cataloging records that are coming in do have the genre heading "Young adult fiction". This has not been investigated thoroughly as to how authentic that is, but it is being used out there as a locally created heading.
Another option for identifying young adult fiction in your catalogs may be through using your ILS. Some can facet by young adult, in the online catalog, as long as the participating libraries have located that particular resource in the Young Adult Room. So, there may be an opportunity to do it sometimes through your ILS features.
Yes, in fact, they already have. It used to be that when there were full printed editions, editors would be working on changes in the meantime, but sometimes those changes would get held back until a new print edition was put out, which seemed to make less and less sense as we looked at it. WebDewey allows us, once a change is ready, to roll it out. The print-on-demand versions hopefully fill that hole and mean print isn't completely dead.
Yes, but there is extra information that goes in there too, which was just approved by the MARC Advisory Committee at the beginning of this year. The documentation on the Library of Congress MARC pages is up to date, but we're still getting the word out on this one. There is different guidance based on whether you are using a print edition, where the content is static, or WebDewey.
The short version is that with WebDewey, you cite the date on which you're cataloging, combined with the history notes that are in WebDewey. Those are good tools: if you ever come across a number in the future and the number that's assigned to a resource doesn't make sense, you can refer to that data and maybe find that, when the number was assigned, it was the right number, even if the subject has since moved.
Yes, you are more than welcome to enter original materials in WorldCat. That is hugely useful for scholars out in the world, to be able to discover that the resource exists, even if you're the only one that holds it.
You can talk with your inter-library loan folks about how to have policies in place to make your library staff and patrons aware that something is not available for inter-library loan. Lots of libraries have policies that restrict certain materials from inter-library loan so that people can only come and use them onsite.
Wikipedia is a very good resource for name order. It helps that there is a really global base of volunteer editors. If you look at Wikipedia articles on a lot of famous Icelandic people, for example, there is usually a note at the top saying that it's an Icelandic name and that you typically refer to that person by what looks to us like their first name. It does this for other cultures as well.
There also may be a Library of Congress authority record that represents that author that can be used.
IFLA has a document that lists naming conventions for many countries.
There are also special instructions in RDA for Icelandic names (RDA Appendix F: Additional instructions on names of persons).
Yes, absolutely. In these, or any areas, we are really trying to prioritize community-driven updates to the DDC that will best reflect the needs of your users. In the past years, we have been very open and welcoming to suggestions from the community at large. It's been a very successful program. The Dewey Editorial Policy Committee has an opportunity to review these suggestions from all sorts of places, and we move on them pretty aggressively. For more information about how to get involved in this process, see oc.lc/DeweyContributors.
They are published and made available along with other changes. If you are looking at a feed of changes in WebDewey, it's not necessarily going to be clear which changes we've gotten from the community. We will sometimes highlight specific contributions in a blog post. We also have another related site (oc.lc/DeweyExhibits) where we post proposals that come either from myself or from the community. You can also see proposals from the past few years there, and if you look at a specific proposal you can see where it came from.
I try to look to see if the resource suggests the audience level. If there isn't an audience level clearly indicated with the resource, I default to 'j' just because I don't feel that I can responsibly 'level' a resource.
PZ is juvenile fiction and juvenile other things, such as stories and rhymes and things like that. I haven't seen a lot with double LC classification in addition to PZ and PN, PR, PS. Generally, if what you have is juvenile fiction or juvenile stories, rhyme, or something like that, then that would be what you would want to go with. Of course, there would be different numbers for the language that the resource is in that are associated with that PZ, just like there are for the various PN, PR, PS adult numbers. Unless you are talking about poetry, in which case, I believe the PN, PS, PR would be more appropriate rather than PZ.
I don't remember seeing a follow-up to their survey when they were asking whether the letters that they use as if it were a cutter should be changed to the same or a similar practice to what they use for the non-PZ numbers. As far as I know, they are still using the letters instead of the cutters. If you have any questions about LC practice or the Library of Congress in general, you can send an email to the head of CYAK (Children's and Young Adult's Cataloging Program at the Library of Congress), Stacey Devine at cyacprog@loc.gov. They are very responsive and helpful with any sorts of questions you may have for them.
For my local catalog, when you are faced with something like that, you have options. It could be a book with a CD, a CD with a book, or it could be a kit. I really try to limit what I call a kit to something that is in more than two formats. So, in order for something like this to be a kit, I would have to have a book, a CD, plus a puppet, or a book, a CD, plus flashcards. With just two formats, it can go either way. What I try to do in my catalog is keep any audio plus book, just that, have it be an audio format with an accompanying book. Just to keep them consistent and the icons correct.
For Connexion, if the materials are purchased together or issued together by a publisher, definitely cataloging them together is a good idea. Local practice may vary between libraries. You're trying to determine what is the predominant material, when you're cataloging these. With some of them, it really is hard. It may be that you're using a book with an accompanying CD, or maybe the audio recording is more important than the book, but sometimes they're both equally important. So, you really are making choices about how to catalog them. Either choice is acceptable, a book with accompanying CD or a CD with accompanying book. I agree that if you only have those two pieces, you probably don't want to go with a kit. But, if you have more than two pieces, three types of formats, you will want to consider kit as an option. If they are issued separately, you may well want to catalog them separately, but in WorldCat (where Connexion is one of the ways to access WorldCat or catalog in WorldCat), you will probably see all three - kit, book with accompanying CD, or CD with accompanying book.
It seems like the situation here is that the books are often published by various publishers, and the CDs were done by Recorded Books, which is then issuing them together. So, in the case of the CD, Recorded Books is really the publisher, but in the case of the book that comes with the CD, Recorded Books is maybe acting more as a distributor. Aside from that issue of what material predominates (do I describe this as a book with accompanying CD, or is it a CD with accompanying book?), there is this issue of: is it really issued together? It kind of is, but then it was published separately, so it's kind of not. This is something that we could really use the help of our colleague, Jay, and maybe we should have some discussion about this. We will follow up with Jay Weitz and chat about this a little bit more, and make sure that any decisions are added to the notes when they are available.
Subject Headings Manual H 1690 says to assign LC subject headings and subdivisions to topical materials for juveniles up through age 15 or ninth grade, and to use the juvenile subdivisions for those. So, if the work or resource is intended for ages above 15, then you would in theory treat it as just a regular general work or adult book without those juvenile subdivisions. Although actual practice on that may vary in the real world.
In local catalogs, you do have the opportunity to flag the location as being young adult. So, when you are working with your local system, you might find a way to represent these resources as young adult and not use juvenile fiction, which sometimes looks a little awkward in young adult literature. Plus, if young adults are looking for something and they see juvenile fiction on it, they think it's for babies and they don't want it. I would suggest looking at your local systems and seeing if there's some type of solution you can come up with, pairing faceting of location with subject headings or call numbers, to responsibly label these young adult materials so that it's clear they are for young adults.
Yes, it is. We use the same printing software to prepare those as we did the old print editions. They're nice trade paperbacks rather than hard backs, but otherwise the look of it is very similar.
Since the children's subject headings aren't currently controllable, maintenance has to be done manually, or by automated tools that are essentially the same as manual work. The expansion of the authorities for children's headings hasn't really had an impact as of yet, aside from explicitly establishing things that were previously assumed to be established by their existence in LC practice. Now there's an explicit authority record for each children's subject heading, so it's more clear that a heading is backed by an authority record, even though it can't be controlled.
With genre headings, there are many types of thesauri and lists in use. Make sure you are drawing from the same list all the time. When you are looking at a particular list and using a genre heading from it, go by the guidance set up by the list you're using: its definitions and its instructions for use. That will usually take care of everything. If not, take a common-sense approach to using that particular genre heading.