COUNTER Code of Practice Release 5.0.1¶
COUNTER’s library and content provider members have contributed to the development of Release 5 (R5) of the COUNTER Code of Practice.
The Code of Practice enables content providers to produce consistent, comparable and credible usage data for their online content. This allows librarians and other interested parties to compare the usage data they receive, and to understand and demonstrate the value of the electronic resources to which they subscribe.
Release 5.0.1 (published 10 December 2018, with updates for Appendix E published 22 January 2019 and Appendix A published 20 February 2019) is the current Code of Practice and the requirement for COUNTER compliance effective from January 2019.
The Code of Practice is available from the COUNTER website as an interactive code. This online version is the version of record for Release 5 of the Code of Practice.
Foreword¶
Librarians spend considerable amounts of money licensing different types of online content and want to measure return on the investment and to ensure that library budgets are spent as productively as possible. One of the ways to measure this return on investment is to assess usage statistics.
This release of the COUNTER Code of Practice is designed to balance changing reporting needs with the need to make things simpler so that all content providers can achieve compliance and librarians can have usage statistics that are credible, consistent and comparable.
Consistency in report formats
Release 5 consists of four Master Reports. Each of the Master Reports is associated with several pre-set filtered Standard Views, but can also be examined from different viewpoints to suit the needs of the person working with the report. Librarians will be able to use Master Reports to customize their analysis to meet their specific reporting needs.
Consistency and clarity in metrics
Release 5 also introduces new Metric Types, which ensure flexibility and depth of reporting.
Flexibility
Flexibility is built into Release 5 with the introduction of attributes, pieces of information which can be associated with multiple metrics. Providing information about matters such as year of publication, access type, and data types means that users can roll up or drill down through reports with ease, eliminating the need for special purpose reports.
How do I use this Code of Practice?
The Code of Practice is available from the COUNTER website as an interactive code. This online version is the version of record for Release 5 of the Code of Practice.
You can download each of the sections in the Code of Practice.
In the navigation bar immediately below Search, clicking on Glossary will open a pop-up window with terms and definitions.
You can click the + or - controls to increase or decrease the font size in the Code of Practice.
The Code of Practice will be of interest to both content providers and librarians; however, some sections are more relevant to particular use cases.
Sections 1 and 2 provide an introduction and outline of the scope of the COUNTER Code of Practice.
Sections 3 and 4 explain the Master Reports and Standard Views that are required for COUNTER compliance and that librarians can filter and configure to create customized “views” of their usage data. Section 3 also explains Metric Types and Attributes.
Content Providers implementing Release 5
Sections 5 to 7 provide essential information. These sections give detail on the delivery of COUNTER-compliant reports and views, logging usage and processing rules. You will also want to refer to the Friendly Guide To Release 5 Technical Notes for Providers.
COUNTER compliance requires content hosts to implement COUNTER_SUSHI (the standardised model for harvesting online usage data). Section 8 provides the specifications for the RESTful COUNTER_SUSHI API and the methods that must be supported. Appendix F explains handling errors and exceptions.
Content Providers preparing for COUNTER audit
An important feature of the COUNTER Code of Practice is that compliant content providers must be independently audited on a regular basis in order to maintain their COUNTER compliant status. If you are preparing for a COUNTER audit, Section 9 explains the audit process and procedures. Appendix E explains audit requirements and tests.
COUNTER would like to acknowledge the support of UKSG in the publication of the Code of Practice Release 5.
Conventions¶
This Code of Practice is implemented using the following conventions:
The keywords MUST, MUST NOT, REQUIRED, RECOMMENDED, NOT RECOMMENDED, and OPTIONAL in this document are to be interpreted as described in RFC 2119.
Note that the force of these words is modified by the requirement level of the document in which they are used.
MUST (or REQUIRED) means that the definition is an absolute requirement of the specification.
MUST NOT means that the definition is an absolute prohibition of the specification.
RECOMMENDED means that there may be valid reasons in certain circumstances to ignore a particular item, but the full implications should be understood and carefully weighed before choosing a different course.
NOT RECOMMENDED means that there may be valid reasons in certain circumstances when the particular behaviour is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behaviour described with this label.
Content providers implementing the Code of Practice who feel they have a valid disagreement with a requirement of the code are requested to present their case in writing to the COUNTER Project Director and ask for clarification on interpretation of the code.
Text appearing in italic will be replaced with appropriate values at implementation time; terms enclosed in curly brackets are variables. For example, an Exception in the format “{Exception Number}: {Exception Description}” might resolve to “3030: No Usage Available for Requested Dates”.
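Resolving such a template is a simple substitution. A minimal Python sketch, using only the example values from the paragraph above:

```python
# Resolving the "{Exception Number}: {Exception Description}" template.
# The values below are taken from the example above, not from a real feed.
exception = {"Exception Number": 3030,
             "Exception Description": "No Usage Available for Requested Dates"}

message = "{}: {}".format(exception["Exception Number"],
                          exception["Exception Description"])
print(message)  # -> 3030: No Usage Available for Requested Dates
```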
Introduction¶
Since its inception in 2002, COUNTER has been focused on providing a code of practice that helps ensure librarians have access to consistent, comparable, and credible usage reporting for their online scholarly information. COUNTER serves librarians, content providers, and others by facilitating the recording and exchange of online usage statistics. The COUNTER Code of Practice provides guidance on data elements to be measured and definitions of these data elements, as well as guidelines for output report content and formatting and requirements for data processing and auditing. To have their usage statistics and reports designated COUNTER compliant, content providers MUST provide usage statistics that conform to the current Code of Practice.
General Information¶
Purpose¶
The purpose of the COUNTER Code of Practice is to facilitate the recording, exchange, and interpretation of online usage data by establishing open international standards and protocols for the provision of content-provider-generated usage statistics that are consistent, comparable, and credible.
Scope¶
This COUNTER Code of Practice provides a framework for the recording and exchange of online usage statistics for the major categories of e-resources (journals, databases, books, reference works, and multimedia databases) at an international level. In doing so, it covers the following areas: data elements to be measured, definitions of these data elements, content and format of usage reports, requirements for data processing, requirements for auditing, and guidelines to avoid duplicate counting.
Application¶
COUNTER is designed for librarians, content providers and others who require reliable online usage statistics. The guidelines provided by this Code of Practice enable librarians to compare statistics from different platforms, to make better-informed purchasing decisions, and to plan more effectively. COUNTER also provides content providers with the detailed specifications they need to follow to generate data in a format useful to their customers, to compare the relative usage of different delivery channels, and to learn more about online usage patterns. COUNTER also provides guidance to others interested in information about online usage statistics.
Strategy¶
COUNTER provides an open Code of Practice that evolves in response to the demands of the international library and content provider communities. The Code of Practice is continually under review; feedback on its scope and application is actively sought from all interested parties. See Section 12 below.
Governance¶
The COUNTER Code of Practice is owned and developed by Counter Online Metrics (COUNTER), a non-profit distributing company registered in England. A Board of Directors governs Counter Online Metrics. An Executive Committee reports to the Board, and the day-to-day management of COUNTER is the responsibility of the Project Director.
Definitions¶
This Code of Practice provides definitions of data elements and other terms that are relevant, not only to the usage reports specified in Release 5 (R5), but also to other reports that content providers may wish to generate. Every effort has been made to use existing ISO, NISO, etc. definitions where appropriate, and these sources are cited (see Appendix A).
Versions¶
The COUNTER Code of Practice will be extended and upgraded as necessary based on input from the communities it serves. Each new version will be made available as a numbered release on the COUNTER website; users will be alerted to its availability. R5 of the Code of Practice replaces Release 4 (R4) of the Code of Practice. The deadline date for implementation of this Release is 01-Jan-2019. After this date, only those content providers compliant with R5 will be deemed compliant with the Code of Practice.
COUNTER R5 introduces a continuous maintenance process (see Section 12 below) that will allow the Code of Practice to evolve over time, minimizing the need for major version changes.
Auditing and COUNTER Compliance¶
An independent annual audit is REQUIRED of each content provider’s reports and processes to certify that they are COUNTER compliant. The auditing process is designed to be simple, straightforward and not unduly burdensome or costly to the content provider while providing reassurance to customers of the reliability of the COUNTER usage data. See Section 9 below and Appendix E for more details.
Relationship to other Standards, Protocols and Codes¶
The COUNTER Code of Practice builds on several existing industry initiatives and standards that address content provider-based online performance measures. Where appropriate, definitions of data elements and other terms from these sources have been used in this Code of Practice, and these are identified in Appendix A.
Making Comments on the Code of Practice¶
The COUNTER Executive Committee welcomes comments on the Code of Practice (see Section 12 below).
Changes from COUNTER Release 4¶
Changes in the nature of online content and how it is accessed have led the COUNTER Code of Practice to evolve to accommodate those changes. This evolution introduced some ambiguities and, in some cases, conflicts and confusion within the Code of Practice. R5 of the COUNTER Code of Practice is focused on improving the clarity, consistency, and comparability of usage reporting.
List of Reports¶
R5 of the COUNTER Code of Practice reduces the overall number of reports by replacing many of the special-purpose reports that are seldom used with a small number of flexible generic reports. All COUNTER R4 reports have either been renamed or eliminated in favour of other COUNTER R5 report options.
See Appendix B, Section B.1.1 for more details.
Report Format¶
The Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol used in R4 was designed to simplify the gathering of usage statistics by librarians. In R5 the SOAP/XML-based SUSHI protocol is replaced with the RESTful COUNTER_SUSHI API, which uses JavaScript Object Notation (JSON) for a more lightweight data interchange. The JSON format is not only easy for humans to read and write but also easy for machines to parse and generate. Support of the COUNTER_SUSHI API is mandatory for compliance with R5 (see Section 8 below).
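For orientation, the sketch below shows what harvesting a Standard View through the COUNTER_SUSHI API can look like. The /reports/{report_id} path and the customer_id, requestor_id, begin_date and end_date parameters follow the COUNTER_SUSHI API Specification referenced in Section 8; the base URL and credential values are placeholders, not a real endpoint.

```python
# Hedged sketch: fetching the TR_J1 Standard View as JSON via COUNTER_SUSHI.
import requests

BASE_URL = "https://sushi.example.com/counter/r5"  # provider-specific placeholder

params = {
    "customer_id": "exampleCustomer",    # placeholder credentials
    "requestor_id": "exampleRequestor",
    "begin_date": "2019-01-01",
    "end_date": "2019-06-30",
}

response = requests.get(f"{BASE_URL}/reports/tr_j1", params=params, timeout=30)
response.raise_for_status()
report = response.json()

# Every COUNTER report carries the same Report_Header structure (Section 3.2.1).
print(report["Report_Header"]["Report_Name"])
print(report["Report_Header"]["Created"])
```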
With R5, all COUNTER reports are structured the same way to ensure consistency, not only between reports, but also between the JSON and tabular versions of the reports. Now, all reports share the same format for the header, the report body is derived from the same set of element names, total rows have been eliminated, and data values are consistent between the JSON and tabular versions. R5 also addresses the problems of terminology and report layouts varying from report to report, as well as JSON and tabular versions of the same report producing different results while still being compliant.
Metric Types¶
R5 strives for simplicity and clarity by reducing the number of Metric_Types and applying these Metric_Types across all reports, as applicable. With R4, Book Reports had metric types that differed from those in Journal Reports, as well as metric types attempting to reflect additional attributes such as mobile usage, usage by format, etc. Most R4 metric types have either been renamed or eliminated in favour of new R5 Metric_Types.
See Appendix B, Section B.1.3 for a table showing the R4 metric types and their R5 equivalents or status.
New Elements and Attributes Introduced¶
With R4 the nature of the usage sometimes had to be inferred based on the name of the report. To provide more consistent and comparable reporting, R5 introduces some additional attributes that content providers can use to create breakdowns and summaries of usage.
| Element/Attribute | Description |
|---|---|
| Access_Type | Used to track usage of content that is either OA_Gold (Gold Open Access) or Controlled (requires a license). |
| Access_Method | Used to track if the purpose of the access was for regular use or for text and data mining (TDM). This attribute allows TDM usage to be excluded from Standard Views and reported on separately. |
| Data_Type | Identifies the type of content usage being reported on. Expanded to include additional Data_Types, including Article, Book, Book_Segment, Database, Dataset, Journal, Multimedia, Newspaper_or_Newsletter, Other, Platform, Report, Repository_Item, and Thesis_or_Dissertation. |
| Publisher_ID | Introduced to improve matching and reporting by publisher. |
| Section_Type | Identifies the type of section that was accessed by the user, including Article, Book, Chapter, Other and Section. Used primarily for reporting on book usage where content is delivered by section. |
| YOP | Year of publication as a single element; simplifies reporting by content age. |
The above items are covered in more detail in Section 3 below as well as in Appendix B, Section B.1.4.
Overview¶
This section provides an overview of the scope of the COUNTER Code of Practice.
Section 3 Technical Specifications for COUNTER Reports introduces the REQUIRED reports, describes the common format shared by all COUNTER reports, and defines the COUNTER report attributes and their values.
Section 4 COUNTER reports provides detailed specifications for each COUNTER report. Use this section to understand what elements are included in each report.
Section 5 Delivery of COUNTER Reports outlines the options a content provider MUST provide to enable customers to access their reports.
Section 6 Logging Usage describes various options used for logging usage transactions.
Section 7 Processing Rules for Underlying COUNTER Reporting Data discusses topics such as which return codes to count, double-click filtering, calculating unique items and unique titles accessed in a session, classifying searches (regular, federated, automated, or platform), robots and internet crawlers, tools that cause bulk downloads, and text and data mining.
Section 8 SUSHI for Automated Report Harvesting offers a more in-depth description of the REQUIRED COUNTER_SUSHI API support.
Section 9 Audit provides the requirements for the COUNTER audit.
Section 10 Other Compliance Topics covers license language requiring COUNTER usage statistics, confidentiality of data, and supporting consortia in their need to obtain usage data for their members.
Section 11 Extending the Code of Practice offers suggestions for content providers who may want to create custom reports or include additional elements and attribute values in COUNTER reports.
Section 12 Continuous Maintenance outlines the procedures that have been put in place to allow the Code of Practice to be amended and expanded on an incremental basis in a controlled and managed way.
Section 13 Transitioning from Previous Releases or to New Reporting Services describes the procedures and requirements for transitioning to a new reporting service or underlying logging system and for transitioning to a new COUNTER release, in particular from R4 to R5.
Section 14 Change History provides a list of the Code of Practice releases.
Technical Specifications for COUNTER Reports¶
COUNTER Reports for Libraries¶
Reports for R5 consist of four Master Reports that the librarian can filter and configure to create customized views of their usage data. R5 also specifies Standard Views (pre-set filters/configuration).
To achieve compliance, a content provider MUST offer the Master Reports and Standard Views that are applicable to their Host_Types, with the exception of Standard Views that always would be empty (e.g. an Access Denied Standard View if denials cannot occur). An independent audit is required for these reports.
Content providers may offer additional Master Reports and Standard Views that are not required for compliance, or custom reports (see Section 11.2), according to the rules the Code of Practice sets for such reports. An audit is not required for these reports.
Master Reports¶
Master Reports include all relevant metrics and attributes; they are intended to be customizable through the application of filters and other configuration options, allowing librarians to create a report specific to their needs. The four Master Reports are shown in Table 3.a along with their Report_ID, Report_Name and Host_Types who are REQUIRED to provide these reports. See Section 3.3.1 below for details on Host_Types.
Table 3.a (below): Master Reports
| Report_ID | Report_Name | Details | Host_Types |
|---|---|---|---|
| PR | Platform Master Report | A customizable report summarizing activity across a content provider’s platforms that allows the user to apply filters and select other configuration options. | All Host_Types |
| DR | Database Master Report | A customizable report detailing activity by database that allows the user to apply filters and select other configuration options. | A&I_Database |
| TR | Title Master Report | A customizable report detailing activity at the title level (journal, book, etc.) that allows the user to apply filters and select other configuration options. | Aggregated_Full_Content |
| IR | Item Master Report | A granular, customizable report showing activity at the level of the item (article, chapter, media object, etc.) that allows the user to apply filters and select other configuration options. | Data_Repository* |
* Data repositories may choose to conform to the Code of Practice Release 5 or, alternatively, may wish to work with the Code of Practice for Research Data.
Figure 3.a (below) provides an example of how the user interface could look. The user will be presented with an interface that allows them to select usage dates, one or more Metric_Types, Data_Types, Access_Types, etc. and indicate if the filter columns are to be included. Including the column will cause usage to be broken out by individual values for the selected filter, whereas not including the column will result in usage being summarized for the selected filter.
Figure 3.a: Example of a user interface
Standard Views¶
The goal of Standard Views is to provide a set of pre-filtered views of the Master Reports covering the most common set of library needs. Report_IDs for Standard Views are derived from the Report_ID of the Master Report that they are based on. The format is {Master Report_ID}_{View ID}.
Platform Usage Standard Views¶
The Platform Usage Standard Views are derived from the Platform Master Report and provide a summary of activity on a given platform to support the evaluation of platforms and to provide high-level statistical data to support surveys and reporting to funders.
Table 3.b (below): Platform Usage Standard Views
| Report_ID | Report_Name | Details | Host_Types |
|---|---|---|---|
| PR_P1 | Platform Usage | Platform-level usage summarized by Metric_Type. | All Host_Types |

* Data repositories may choose to conform to the Code of Practice Release 5 or, alternatively, may wish to work with the Code of Practice for Research Data.
See Section 4.1 below for details on Platform Usage Reports.
Database Usage Standard Views¶
The Database Usage Standard Views support the evaluation of the value of a given database of resources (e.g. a full-text database, an A&I database, or a multimedia collection).
Table 3.c (below): Database Usage Standard Views
| Report_ID | Report_Name | Details | Host_Types |
|---|---|---|---|
| DR_D1 | Database Search and Item Usage | Reports on key Searches, Investigations and Requests metrics needed to evaluate a database. | A&I_Database |
| DR_D2 | Database Access Denied | Reports on Access Denied activity for databases where users were denied access because simultaneous-use licenses were exceeded or their institution did not have a license for the database. | A&I_Database |
See Section 4.2 below for details on Database Usage Reports.
Title Usage Standard Views¶
Title Usage Standard Views are used to support the evaluation of the value of a given serial (e.g. journal, magazine, or newspaper) or monograph (e.g. book, eBook, textbook, or reference work) title.
Table 3.d (below): Title Usage Standard Views
| Report_ID | Report_Name | Details | Host_Types |
|---|---|---|---|
| TR_B1 | Book Requests (Excluding OA_Gold) | Reports on full-text activity for books, excluding Gold Open Access content, as Total_Item_Requests and Unique_Title_Requests. The Unique_Title_Requests provides comparable usage across book platforms. The Total_Item_Requests shows overall activity; however, numbers between sites will vary significantly based on how the content is delivered (e.g. delivered as a complete book or by chapter). | Aggregated_Full_Content |
| TR_B2 | Book Access Denied | Reports on Access Denied activity for books where users were denied access because simultaneous-use licenses were exceeded or their institution did not have a license for the book. | Aggregated_Full_Content |
| TR_B3 | Book Usage by Access Type | Reports on book usage showing all applicable Metric_Types broken down by Access_Type. | Aggregated_Full_Content |
| TR_J1 | Journal Requests (Excluding OA_Gold) | Reports on usage of journal content, excluding Gold Open Access content, as Total_Item_Requests and Unique_Item_Requests. The Unique_Item_Requests provides comparable usage across journal platforms by reducing the inflationary effect that occurs when an HTML full text automatically displays and the user then accesses the PDF version. The Total_Item_Requests shows overall activity. | Aggregated_Full_Content |
| TR_J2 | Journal Access Denied | Reports on Access Denied activity for journal content where users were denied access because simultaneous-use licenses were exceeded or their institution did not have a license for the title. | Aggregated_Full_Content |
| TR_J3 | Journal Usage by Access Type | Reports on usage of journal content for all Metric_Types broken down by Access_Type. | Aggregated_Full_Content |
| TR_J4 | Journal Requests by YOP (Excluding OA_Gold) | Breaks down the usage of journal content, excluding Gold Open Access content, by year of publication (YOP), providing counts for the Metric_Types Total_Item_Requests and Unique_Item_Requests. Provides the details necessary to analyze usage of content in backfiles or covered by perpetual access agreements. Note that COUNTER reports do not provide access model or perpetual access rights details. | Aggregated_Full_Content |
See Section 4.3 below for details on Title Usage Standard Views.
Item Usage Standard Views¶
The Standard Views for item-level reporting are designed to support the most common reporting needs. The Standard View for repositories (Journal Article Requests) provides insight into the usage of individual journal articles. The Standard View for multimedia (Multimedia Item Requests) allows evaluation of multimedia at the title level.
Table 3.e (below): Item Usage Standard Views
| Report_ID | Report_Name | Details | Host_Types |
|---|---|---|---|
| IR_A1 | Journal Article Requests | Reports on journal article requests at the article level. This report is limited to content with a Data_Type of Article, Parent_Data_Type of Journal, and Metric_Types of Total_Item_Requests and Unique_Item_Requests. This Standard View must be provided only if (a) it is clear for all articles in IR whether they are journal articles or not and (b) the parent item is known for all journal articles. | Repository |
| IR_M1 | Multimedia Item Requests | Reports on multimedia requests at the item level. | Multimedia |
See Section 4.4 below for details on Item Usage Reports.
Formats for COUNTER Reports¶
R5 reports can be delivered in tabular form or as machine-readable data (JSON) via the COUNTER_SUSHI API. The tabular form MUST be either Excel or a tab-separated-value (TSV) file. The reports in JSON and TSV format MUST be encoded using UTF-8. The JSON format MUST comply with the COUNTER_SUSHI API Specification (see Section 8 below).
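As an informal illustration of consuming the tabular form, the Python sketch below splits a TSV report into header and body; the file name is a placeholder, and the row positions follow the layout specified in Section 3.2.1 below.

```python
# Minimal sketch: splitting a tabular COUNTER report (TSV) into its parts.
import csv

# "tr_j1.tsv" is a placeholder file name; reports MUST be UTF-8 encoded.
with open("tr_j1.tsv", encoding="utf-8", newline="") as f:
    rows = list(csv.reader(f, delimiter="\t"))

header_rows = rows[:12]     # the 12 name-value header rows (Section 3.2.1)
# rows[12] is the blank row 13 that separates the header from the body
column_headings = rows[13]  # column headings of the report body
body = rows[14:]            # one row per report item and metric type
```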
All COUNTER reports have the same layout and structure. Figure 3.b (below) provides an example of the “Journal Requests (Excluding OA_Gold)” Standard View. Figure 3.c (below) shows the layout for tabular reports, which will be the focus of the discussions throughout this document. Note that the COUNTER_SUSHI API Specification includes the same elements with the same or similar names; therefore, understanding the tabular reports translates to an understanding of what is REQUIRED in reports retrieved via the COUNTER_SUSHI API.
Figure 3.b: Sample “Journal Requests (Excluding OA_Gold)” Standard View
Figure 3.c: Layout for tabular COUNTER reports
All COUNTER reports have a header. In tabular reports, the header is separated from the body with a blank row (to facilitate sorting and filtering in Excel). Beneath that is the body of the report with column headings. The contents of the body will vary by report. Figure 3.c (above) identifies the different kinds of information you may find in the report and the relative positioning of this information. All of this is discussed in more detail below.
Report Header¶
The first 12 rows of a tabular COUNTER report contain the header, and the 13th row is always blank. The header information is presented as a series of name-value pairs, with the names appearing in Column A and the corresponding values appearing in Column B. All tabular COUNTER reports have the same names in Column A. Column B entries will vary by report.
Figure 3.d: Common Report Header Information
Figure 3.d (above) shows the layout of the common header. The 12 elements in Column A and the values in Column B are discussed in more detail in the table below. Note that the element names (Column A) MUST appear in the COUNTER report exactly as they are shown here. Capitalization, spelling, and punctuation MUST match exactly.
Table 3.f (below): COUNTER Report Header Elements
| Element Name | Description of value to provide | Example |
|---|---|---|
| Report_Name | The name of the report as it appears in Section 3.1. | Journal Requests (Excluding OA_Gold) |
| Report_ID | The unique identifier for the report as it appears in Section 3.1. | TR_J1 |
| Release | The COUNTER release this report complies with. | 5 |
| Institution_Name | For subscription-based services, the name of the institution to which the usage is attributed. For OA publishers and repositories, where it is not possible to identify usage by individual institutions, the usage should be attributed to “The World”. | Mt. Laurel University |
| Institution_ID | A series of identifiers that represent the institution in the format of {namespace}:{value}. Include multiple identifiers by separating with a semicolon-space (“; ”). Permitted identifier namespaces are ISIL, ISNI, OCLC and, for local identifiers assigned by the content provider, the platform ID of the content provider. | ISNI:0000000419369078; pubsiteA:PrncU |
| Metric_Types | A semicolon-space delimited list of Metric_Types requested for this report. Note that even though a Metric_Type was requested, it might not be included in the body of the report if no report items had usage of that type. | Unique_Item_Investigations; Unique_Item_Requests |
| Report_Filters | A series of zero or more report filters applied on the reported usage, excluding Metric_Type, Begin_Date and End_Date (which appear in separate rows in the tabular reports for easier reading). Typically, a report filter affects the amount of usage reported. Entries appear in the form of {filter name}={filter value} with multiple filter name-value pairs separated with a semicolon-space (“; ”) and multiple filter values for a single filter name separated by the vertical pipe (“\|”) character. | Access_Type=Controlled; Access_Method=Regular |
| Report_Attributes | A series of zero or more report attributes applied to the report. Typically, a report attribute affects how the usage is presented but does not change the totals. Entries appear in the form of {attribute name}={attribute value} with multiple attribute name-value pairs separated with a semicolon-space (“; ”) and multiple attribute values for a single attribute name separated by the vertical pipe (“\|”) character. | Attributes_To_Show=Access_Type |
| Exceptions | An indication of some difference between the usage that was requested and the usage that is being presented in the report. The format for the exception values is “{Exception Number}:{Exception Description} ({Data})” with multiple exception values separated by semicolon-space (“; ”). The Exception Number and Exception Description MUST match values provided in Table F.1 of Appendix F. The Data is OPTIONAL. Note that for tabular reports, only the limited set of exceptions where usage is returned will apply. | 3031: Usage Not Ready for Requested Dates (request was for 2016-01-01 to 2016-12-31; however, usage is only available to 2016-08-31) |
| Reporting_Period | The date range for the usage represented in the report, in the form of: “Begin_Date=yyyy-mm-dd; End_Date=yyyy-mm-dd”. | Begin_Date=2016-01-01; End_Date=2016-08-30 |
| Created | The date and time the usage was prepared, in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). | 2016-10-11T14:37:15Z |
| Created_By | The name of the organization or system that created the COUNTER report. | EBSCO Information Services |
| (blank row) | Row 13 MUST be blank. | |
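Continuing the illustrative TSV sketch from earlier in Section 3.2, the name-value header rows map naturally onto a dictionary keyed by the Column A element names above:

```python
# Build a {element name: value} mapping from the 12 header rows
# (`header_rows` comes from the TSV sketch earlier in Section 3.2).
header = {row[0]: (row[1] if len(row) > 1 else "") for row in header_rows}

print(header["Report_ID"])         # e.g. TR_J1
print(header["Reporting_Period"])  # e.g. Begin_Date=2016-01-01; End_Date=2016-08-30
```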
Report Body¶
Figures 3.b and 3.c (above) show the body of the COUNTER reports containing an extensive array of data elements. Not all reports will include all elements. When formatting a report, maintain the order of elements described below, but only include those elements relevant to that report. Where practical, the discussion below will provide guidance as to which reports an element may be included in. See Section 4 below for an extensive mapping of elements to reports.
Report Item Description
Every COUNTER report will have columns that describe its report items.
Table 3.g (below): Elements that Describe the Report Item
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Database | Name of database for which usage is being reported. Applies only to Database Reports. | DR | MEDLINE |
| Title | Name of the book or journal for which usage is being reported. Applies only to Title Reports. | TR | Journal of Economics |
| Item | Name of the article, book chapter, multimedia work, or repository item for which usage is being reported. Applies only to Item Reports. | IR | CRISPR gene-editing tested in a person for the first time |
| Publisher | Name of the publisher of the content item. Note that when the content item is a database, the publisher would be the organization that creates that database. | DR, TR, IR | Taylor & Francis |
| Publisher_ID | A unique identifier for the publisher in the form of {namespace}:{value}. When multiple identifiers are available for a given publisher, include all identifiers separated with semicolon-space (“; ”), but only one per type. Permitted identifier namespaces are ISNI and, for local identifiers assigned by the content provider, the platform ID of the content provider. | DR, TR, IR | ISNI:1234123412341234 |
Platform
The next column in the report identifies the platform where the activity happened.
Table 3.h (below): Elements that Identify the Platform
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Platform | Identifies the platform/content host where the activity took place. Note that in cases where individual titles or groups of content have their own branded user experience but reside on a common host, the identity of the underlying common host MUST be used as the Platform. | All reports | EBSCOhost |
Report Item Identifiers
The item being reported on is further identified by the columns to the right of the platform.
Table 3.i (below): Elements for Report Item Identifiers
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Authors | Authors of the work for which usage is being reported in the format {author name} ({author identifier}) with author identifier in the format {namespace}:{value}. Permitted identifier namespaces are ISNI and ORCID. A maximum of three authors should be included with multiple authors separated by semicolon-space (“; ”). Note that this element is only used in tabular reports; in JSON reports authors are represented as Item_Contributors with Type Author. | IR | John Smith (ORCID:0000-0001-2345-6789) |
| Publication_Date | Date of publication for the work in the format yyyy-mm-dd. | IR | 2018-09-05 |
| Article_Version | ALPSP/NISO code indicating the version of the parent work. Possible values are the codes for Accepted Manuscript, Version of Record, Corrected Version of Record, and Enhanced Version of Record. | IR | VoR |
| DOI | Digital Object Identifier for the item being reported on in the format {DOI prefix}/{DOI suffix}. | TR, IR | 10.1629/uksg.434 |
| Proprietary_ID | A proprietary ID assigned by the content provider for the item being reported on. Format as {namespace}:{value} where the namespace is the platform ID of the host which assigned the proprietary identifier. | DR, TR, IR | publisherA:jnrlCode123 |
| ISBN | International Standard Book Number in the format ISBN-13 with hyphens. | TR, IR | 978-3-16-148410-0 |
| Print_ISSN | International Standard Serial Number assigned to the print instance of a serial publication in the format nnnn-nnn[nX]. | TR, IR | 0953-1513 |
| Online_ISSN | International Standard Serial Number assigned to the online instance of a serial publication in the format nnnn-nnn[nX]. | TR, IR | 2048-7754 |
| Linking_ISSN | International Standard Serial Number that links together the ISSNs assigned to all instances of a serial publication in the format nnnn-nnn[nX] (JSON reports only). | TR, IR | 0953-1513 |
| URI | Universal Resource Identifier, a valid URL or URN according to RFC 3986. | TR, IR | |
Parent Item Description and Identifiers
When reporting usage on content items like articles and book chapters, it is often desirable to identify the item’s parent item, such as the journal or book it is part of. This next grouping of columns identifies the parents and is used by a small subset of reports.
Table 3.j (below): Elements that Describe a Parent Item
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Parent_Title | Title of the parent item. | IR | The Serials Librarian |
| Parent_Authors | Authors of the parent work. See the Authors element in Table 3.i for the format. | IR | |
| Parent_Publication_Date | Date of publication for the parent work in the format yyyy-mm-dd. | IR | |
| Parent_Article_Version | ALPSP/NISO code indicating the version of the parent work. Possible values are the codes for Accepted Manuscript, Version of Record, Corrected Version of Record, and Enhanced Version of Record. | IR | VoR |
| Parent_Data_Type | Identifies the nature of the parent. | IR | Journal |
| Parent_DOI | DOI assigned to the parent item in the format {DOI prefix}/{DOI suffix}. | IR | |
| Parent_Proprietary_ID | A proprietary ID that identifies the parent item. Format as {namespace}:{value} where the namespace is the platform ID of the host which assigned the proprietary identifier. | IR | TandF:wser20 |
| Parent_ISBN | ISBN of the parent item in the format ISBN-13 with hyphens. | IR | |
| Parent_Print_ISSN | Print ISSN assigned to the parent item in the format nnnn-nnn[nX]. | IR | 0361-526X |
| Parent_Online_ISSN | Online ISSN assigned to the parent item in the format nnnn-nnn[nX]. | IR | 1541-1095 |
| Parent_URI | URI (valid URL or URN according to RFC 3986) for the parent item. | IR | https://www.tandfonline.com/action/journalInformation?journalCode=wser20 |
Component Item Description and Identifiers
Repositories often store multiple components for a given repository item. These components could take the form of multiple files or datasets, which can be identified and their usage reported separately in Item Master Reports. Note that component usage may only be reported for Total_Item_Investigations and Total_Item_Requests. For other Metric_Types the usage cannot be broken down by component, and the corresponding cells MUST be empty.
Table 3.k (below): Elements that Describe a Component Item
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Component_Title | Name or title of the component item. | IR | |
| Component_Authors | Authors of the component item. See the Authors element in Table 3.i for the format. | IR | |
| Component_Publication_Date | Date of publication for the component item in the format yyyy-mm-dd. | IR | |
| Component_Data_Type | Data type of the component item. | IR | |
| Component_DOI | DOI assigned to the component item in the format {DOI prefix}/{DOI suffix}. | IR | |
| Component_Proprietary_ID | A proprietary ID assigned by the repository to uniquely identify the component. Format as {namespace}:{value} where the namespace is the platform ID of the repository which assigned the proprietary identifier. | IR | |
| Component_ISBN | ISBN that is assigned to the component item in the format ISBN-13 with hyphens. | IR | |
| Component_Print_ISSN | Print ISSN that is assigned to the component item in the format nnnn-nnn[nX]. | IR | |
| Component_Online_ISSN | Online ISSN that is assigned to the component item in the format nnnn-nnn[nX]. | IR | |
| Component_URI | URI (valid URL or URN according to RFC 3986) assigned to the component item. | IR | |
Item and Report Attributes
Table 3.l (below): Elements for Item and Report Attributes
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Data_Type | Nature of the content that was used. See Section 3.3.2 for more detail. | PR, DR, TR, IR | Book |
| Section_Type | When content is accessed in chunks or sections, this attribute describes the nature of the content unit. See Section 3.3.3 for more detail. | TR | Article |
| YOP | Year of publication for the item being reported on. See Section 3.3.7 for more detail. | TR, IR | 1997 |
| Access_Type | See Section 3.3.5 for more detail. | TR, IR | Controlled |
| Access_Method | See Section 3.3.6 for more detail. | PR, DR, TR, IR | Regular |
Metric Type
Table 3.m (below): Report Element for Metric_Type
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Metric_Type | The type of activity that is being counted. See Section 3.3.4 for more detail. | All reports | Total_Item_Investigations |
Usage Data
Table 3.n (below): Elements for Usage Data
| Element Name | Description | Reports | Examples |
|---|---|---|---|
| Reporting_Period_Total | Total of usage in this row for all months covered. Note that this element does NOT appear in the JSON reports; instead the JSON format offers a Granularity report attribute (see Section 3.3.8 for details). | All reports | 123456 |
| Mmm-yyyy | A series of columns with usage for each month covered by the report. The format is Mmm-yyyy. Note: In the JSON format this is represented by Begin_Date and End_Date date elements for each month. | All reports | May-2016 |
COUNTER Report Common Attributes and Elements¶
Early releases of the COUNTER Code of Practice focused on usage statistics related to journals. That was expanded to books, and later articles and multimedia collections were added. R5 further expands the scope of COUNTER into the area of research data and social media. In order to help organize this increased scope in a single, consistent, and coherent Code of Practice, several new elements and attributes have been added.
Host Types¶
Usage reports are provided by many different types of content hosts, such as eBook, A&I_Database, eJournal, Discovery_Service, and Multimedia hosts. Usage reporting needs vary by Host_Type. To accommodate this variance, R5 defines a set of Host_Type categories. Although the Host_Type does not appear on the COUNTER report, the Code of Practice uses Host_Types throughout this document to help content providers identify which reports, elements, metric types, and attributes are relevant to them. The Host_Types are:
Table 3.o (below): List of Host_Type Values
| Host_Type | Description | Examples |
|---|---|---|
| A&I_Database | Provides access to databases containing abstract and index information on scholarly articles intended to support discovery. | APA |
| Aggregated_Full_Content | Provides access to aggregated pre-set databases of full text and other content where content is accessed in the context of the licensed database. | EBSCOhost |
| Data_Repository | Provides access to research data; includes subject and institutional data repositories, etc. | UK Data Service - ReShare |
| Discovery_Service | Assists users with discovery of scholarly content by providing access to a central index of articles, books, and other metadata. | EBSCOhost (EDS) |
| eBook | Provides access to eBook content made available as individual eBooks or eBook packages. | EBL |
| eBook_Collection | Provides access to eBook content that is sold as fixed collections and behaves like databases. | EBSCOhost |
| eJournal | Provides access to online serial (journals, conferences, newspapers, etc.) content made available as individual titles or packages. | ScienceDirect |
| Full_Content_Database | Provides access to databases that are a collection of content items that are not otherwise part of a serial or monograph (i.e. non-aggregated). | Cochrane |
| Multimedia | Provides access to audio, video, or other multimedia content. | Alexander Street Press |
| Multimedia_Collection | Provides access to multimedia materials sold as and accessed like databases. | |
| Repository | Provides access to an institution’s research output. Includes subject, institutional, and departmental repositories, etc. | Cranfield CERES |
| Scholarly_Collaboration_Network | A service used by researchers to share information about their work. | Mendeley |
Note that a given content host may be classified as having multiple Host_Types and would be expected to provide reports, metric types, elements, and attributes applicable to all. For example, EBSCOhost would be classified as A&I_Database, Aggregated_Full_Content, Discovery_Service, eBook, and eBook_Collection.
Data Types¶
R5 reports on many types of scholarly information. These major groupings, referred to as Data_Types, are listed in the table below along with the Host_Types and reports to which they apply. All Data_Types apply to the Platform Reports since they summarize the usage on the platform. Note that the table lists only Host_Types required to provide one or more reports for compliance, but content providers may offer additional reports. For example, Host_Type eJournal might also offer IR and IR_A1 and would then use Data_Type Article in these reports.
Table 3.p (below): List of Data_Type Values
| Data_Type | Description | Host_Types | Reports |
|---|---|---|---|
| Article | An article, typically published in a journal or reference work. Note that Data_Type Article is only applicable for Item Reports when the article is the item; in Title Reports this is represented by the Section_Type. | Repository | PR, IR |
| Book | A monograph text. | A&I_Database | PR, DR, TR, IR |
| Book_Segment | A book segment (e.g. chapter, section, etc.). Note that Data_Type Book_Segment is only applicable for Item Reports when the book segment is the item; in Title Reports this is represented by the Section_Type. | Repository | PR, IR |
| Database | A fixed database where content is searched and accessed in the context of the database. A given item on the host may be in multiple databases but a transaction must be attributed to a specific database. Note that Data_Type Database is only applicable for Searches and Access Denied at the database level and for Investigations and Requests for Full_Content_Databases*. | A&I_Database | PR, DR |
| Dataset | A data set. | Data_Repository | PR, IR |
| Journal | Textual content published serially as a journal or magazine. | A&I_Database | PR, DR, TR, IR |
| Multimedia | Multimedia content such as audio, image, streaming audio, streaming video, and video. | Multimedia | PR, DR, IR |
| Newspaper_or_Newsletter | Textual content published serially in a newspaper or newsletter. | A&I_Database | PR, DR, TR, IR |
| Other | Content that cannot be classified by any of the other Data_Types. | A&I_Database | PR, DR, TR, IR |
| Platform | A content platform that may reflect usage from multiple Data_Types. Note that Data_Type Platform is only applicable for Searches_Platform. | All Host_Types | PR |
| Report | A report. | A&I_Database | PR, DR, TR, IR |
| Repository_Item | A generic classification used for items stored in a repository. | Repository | PR, IR |
| Thesis_or_Dissertation | A thesis or dissertation. | A&I_Database | PR, DR, TR, IR |
* Full_Content_Databases may also use Data_Type Database in the Title Master Report if this report is offered. All other Host_Types MUST report Investigations and Requests either with the title-level Data_Types (e.g. Journal for a journal article or Book for a book, from Host_Type A&I_Database, Aggregated_Full_Content, Discovery_Service, eBook, eBook_Collection and eJournal), or with the item-level Data_Types (e.g. Article for an article or Multimedia for a video from Host_Type Data_Repository, Multimedia, Multimedia_Collection, Repository and Scholarly_Collaboration_Network). These Data_Types MUST be used across all reports required for compliance to ensure consistent reporting.
Section Types¶
Some scholarly content is accessed in sections. For example, a user may access a chapter or section at a time. Section_Type was introduced to provide a categorization of the transaction based on the type of section accessed. For example, a librarian could use a Title Master Report to see a breakdown of usage by Title and Section_Type. The following table lists the Section_Types defined by COUNTER and the Host_Types and reports to which they apply.
Table 3.q (below): List of Section_Type Values
| Section_Type | Description | Host_Types | Reports |
|---|---|---|---|
| Article | An article from a compilation, such as a journal, encyclopedia, or reference book. | Aggregated_Full_Content | TR |
| Book | A complete book, accessed as a single file. | Aggregated_Full_Content | TR |
| Chapter | A chapter from a book. | Aggregated_Full_Content | TR |
| Other | Content delivered in sections not otherwise represented on the list. | Aggregated_Full_Content | TR |
| Section | A group of chapters or articles. | Aggregated_Full_Content | TR |
Metric Types¶
Metric_Types, which represent the nature of activity being counted, can be grouped into the categories of Searches, Investigations, Requests, and Access Denied. Tables 3.r, 3.s and 3.t (below) list the Metric_Types and the Host_Types and reports they apply to.
Searches
Table 3.r (below): List of Metric_Types for Searches
| Metric_Type | Description | Host_Types | Reports |
|---|---|---|---|
| Searches_Regular | Number of searches conducted against a user-selected database where results are returned to the user on the host UI. The user is responsible for selecting the databases or set of databases to be searched. This metric only applies to usage tracked at the database level and is not represented at the platform level. | A&I_Database | DR |
| Searches_Automated | Searches conducted on the host site or discovery service where results are returned in the host UI and multiple databases are searched without user selection of databases. This metric only applies to usage that is tracked at the database level and is not represented at the platform level. | A&I_Database | DR |
| Searches_Federated | Searches conducted by a federated search engine where the search activity is conducted remotely via client-server technology. This metric only applies to usage that is tracked at the database level and is not represented at the platform level. | A&I_Database | DR |
| Searches_Platform | Searches conducted by users and captured at the platform level. Each user-initiated search can only be counted once regardless of the number of databases involved in the search. This metric only applies to Platform Reports. | All Host_Types | PR |
*Repositories should provide these Metric_Types if they are able to.
Investigations and Requests of Items and Titles
This group of Metric_Types represents activities where content items were retrieved (Requests) or information about a content item (e.g. an abstract) was examined (Investigations). Any user activity that can be attributed to a content item will be considered an Investigation, including downloading or viewing the item. Requests are limited to user activity related to retrieving or viewing the content item itself. The figure below provides a graphical representation of the relationship between Investigations and Requests.
Figure 3.e: The relationship between Investigations and Requests
Totals, Unique Items and Unique Titles
R5 also introduces the concept of unique items and unique titles. The Metric_Types that begin with Total are very similar to the metrics of R4, i.e. if a given article or book or book chapter was accessed multiple times in a user session, the metric would increase by the number of times the content item was accessed (minus any adjustments for double-clicks).
Unique_Item metrics have been introduced in R5 to help eliminate the effect different styles of user interfaces may have on usage counts. With R5, if a single article is accessed multiple times in a given user session, the corresponding Unique_Item metric can only increase by 1 to simply indicate that the content item was accessed in the session. Unique_Item metrics provide comparable usage across journal platforms by reducing the inflationary effect that occurs when an HTML full text automatically displays and the user then accesses the PDF version.
Unique_Title metrics have been introduced in R5 to help normalize eBook metrics. Since eBooks can be downloaded as an entire book in a single PDF or as separate chapters, the counts for R4’s BR1 (book downloads) and BR2 (section downloads) are not comparable. With R5, the book’s Unique_Title metrics are only increased by 1 no matter how many (or how many times) chapters or sections were accessed in a given user session. Unique_Title metrics provide comparable eBook metrics regardless of the nature of the platform and how eBook content was delivered.
The Unique_Title metrics MUST NOT be used for Data_Types other than Book as they are not meaningful for them. If a book contains both OA_Gold and Controlled sections or sections with different YOPs, the usage must be broken down by Access_Type and YOP so that the total counts are consistent between reports including and not including these columns/elements.
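To make the difference between the Total, Unique_Item, and Unique_Title approaches concrete, here is a small sketch with invented session data; the normative counting rules are those in Section 7.

```python
# Invented example: one user session's requests after double-click filtering,
# recorded as (title, item) pairs for a single eBook.
session_requests = [
    ("Macroeconomics (eBook)", "Chapter 1"),  # HTML view
    ("Macroeconomics (eBook)", "Chapter 2"),  # HTML view
    ("Macroeconomics (eBook)", "Chapter 2"),  # PDF of the same chapter
]

total_item_requests = len(session_requests)                            # 3
unique_item_requests = len(set(session_requests))                      # 2
unique_title_requests = len({title for title, _ in session_requests})  # 1
# Each item counts once per session for Unique_Item; the book counts once
# per session for Unique_Title, however many chapters were accessed.
```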
Table 3.s (below): List of Metric_Types for Requests and Investigations
| Metric_Type | Description | Host_Types | Reports |
|---|---|---|---|
| Total_Item_Investigations | Total number of times a content item or information related to a content item was accessed. Double-click filters are applied to these transactions. Examples of content items are articles, book chapters, or multimedia files. | All Host_Types | PR, DR, TR, IR |
| Unique_Item_Investigations | Number of unique content items investigated in a user-session. Examples of content items are articles, book chapters, or multimedia files. | All Host_Types | PR, DR, TR, IR |
| Unique_Title_Investigations | Number of unique titles investigated in a user-session. Examples of titles are journals and books. | A&I_Database | PR, DR, TR |
| Total_Item_Requests | Total number of times a content item was requested (i.e. the full text or content was downloaded or viewed). Double-click filters are applied to these transactions. Examples of content items are articles, book chapters, or multimedia files. | Aggregated_Full_Content | PR, DR, TR, IR |
| Unique_Item_Requests | Number of unique content items requested in a user-session. Examples of content items are articles, book chapters, or multimedia files. | Aggregated_Full_Content | PR, DR, TR, IR |
| Unique_Title_Requests | Number of unique titles requested in a user-session. Examples of titles are journals and books. | Aggregated_Full_Content | PR, DR, TR |
*Repositories should provide these Metric_Types if they are able to.
Access Denied
Table 3.t (below): List of Metric_Types for Access Denied
| Metric_Type | Description | Host_Types | Reports |
|---|---|---|---|
| No_License | Number of times access was denied because the user’s institution did not have a license to the content. Double-click filtering applies to this Metric_Type. Note that if the user is automatically redirected to an abstract, that action will be counted as a No_License and also as an Item_Investigation. | A&I_Database | DR, TR, IR |
| Limit_Exceeded | Number of times access was denied because the licensed simultaneous-user limit for the user’s institution was exceeded. Double-click filtering applies to this Metric_Type. | A&I_Database | DR, TR, IR |
Access Types¶
In order to track the value of usage for licensed content, librarians want to know how much Open Access or other freely available content was used and how much content was behind a paywall. To accommodate this, R5 introduces an Access_Type attribute with values of Controlled, OA_Gold, OA_Delayed, and Other_Free_To_Read. The table below lists the Access_Types and the Host_Types and reports they apply to. Note that Access_Type relates to access on the platform where the usage occurs: if access to a Gold Open Access article is restricted on a platform (for example because the article is included in an aggregated full-text database available to subscribers only) the Access_Type is Controlled.
Table 3.u (below): List of Access_Type Values
| Access_Type | Description | Host_Types | Reports |
|---|---|---|---|
| Controlled | At the time of the Request or Investigation the content item was not open (e.g. behind a paywall) because access is restricted to authorized users. Access of content due to a trial subscription/license would be considered Controlled. Platforms providing content that has been made freely available but is not OA_Gold (e.g. free for marketing purposes or because the title offers free access after a year) MUST be tracked as Controlled. | Aggregated_Full_Content | TR, IR |
| OA_Gold | At the time of the user Request or Investigation the content item was available under a Gold Open Access license (content that is immediately and permanently available as Open Access because an article processing charge applies or the publication process was sponsored by a library, society, or other organization). Content items may be in hybrid publications or fully Open Access publications. Note that content items offered as Delayed Open Access (open after an embargo period) MUST currently be classified as Controlled, pending the implementation of OA_Delayed. | Data_Repository | TR, IR |
| OA_Delayed | *** RESERVED FOR FUTURE USE - DO NOT IMPLEMENT *** At the time of the user Request or Investigation the content item was available as Open Access after an embargo period had expired (Delayed Open Access). Note that author-archived works hosted in institutional repositories where access is restricted from public access for an embargo period will report usage as OA_Delayed for content accessed after the embargo period expires. NOTE: This value is not to be used until its inclusion has been approved by COUNTER and a timeframe for implementation published by COUNTER. | | |
| Other_Free_To_Read | At the time of the transaction the content item was available as free-to-read (no license required) and did not qualify under the OA_Gold Access_Type. NOTE: This value is for institutional repositories only. | Data_Repository | IR |
Access Methods¶
In order to track content that was accessed for the purpose of text and data mining (TDM), and to keep that usage separate from normal usage, R5 introduces the Access_Method attribute, with values of Regular and TDM. The table below lists the Access_Methods and the Host_Types and reports they apply to.
Table 3.v (below): List of Access_Method Values
Access_Method | Description | Host_Types | Reports
---|---|---|---
Regular | Refers to activities on a platform or content host that represent typical user behaviour. | All Host_Types | All reports
TDM | Content and metadata accessed for the purpose of text and data mining, e.g. through a specific API used for TDM. Note that usage representing TDM activity is to be included in Master Reports only. | All Host_Types | PR, DR, TR, IR
YOP¶
Librarians also want to analyze collection usage by the age of the content. The YOP usage attribute represents the year of publication; it MUST be tracked for all Investigations, Requests and Access Denied metrics in the Title and Item Reports. The table below lists the Host_Types and reports the YOP attribute applies to.
Table 3.w (below): YOP Values
YOP | Description | Host_Types | Reports
---|---|---|---
yyyy | The year of publication for the item as a four-digit year. If a content item has a different year of publication for an online version than the print, use the year of publication for the Version of Record. If the year of publication is not known, use a value of 0001. For articles-in-press (not yet assigned to an issue), use the value 9999. | Aggregated_Full_Content | TR, IR
Report Filters and Report Attributes¶
Customized views of the usage data are created by applying report filters and report attributes to the Master Reports. The Standard Views specified by R5 are examples of such views. Report attributes define the columns (elements) and report filters the rows (values) included in the reports. For Master Reports the user can choose from specific sets of filters and attributes depending on the report, while for Standard Views the filters and attributes are pre-set except for an optional Platform filter.
The filters and attributes used to create a report are included in the report header (unless the default value is used, in which case the filter/attribute MUST be omitted): for JSON reports as name/value pairs in the Report_Filters and Report_Attributes elements, and for tabular reports encoded in the Metric_Types, Reporting_Period, Report_Filters and Report_Attributes elements (see Section 3.2.1 for the encoding). For the COUNTER_SUSHI API each filter/attribute corresponds to a method parameter with the same name in lower case (see the COUNTER_SUSHI API Specification for details).
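For illustration, the same pair of hypothetical filters might be encoded for the two formats as in this minimal sketch:

```python
# Hypothetical filters used to generate a report
filters = {"Data_Type": "Journal", "Access_Method": "Regular"}

# Tabular Report_Filters header value: "Name=Value" pairs, semicolon-space delimited
tabular_value = "; ".join(f"{name}={value}" for name, value in filters.items())
# -> "Data_Type=Journal; Access_Method=Regular"

# JSON Report_Filters element: an array of name/value pairs
json_value = [{"Name": name, "Value": value} for name, value in filters.items()]
# -> [{"Name": "Data_Type", "Value": "Journal"},
#     {"Name": "Access_Method", "Value": "Regular"}]
```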
The tables below show the attributes and filters and the reports where they (might) appear in the header (excluding Standard Views using the default values).
Table 3.x (below): Report Attributes
Report Attribute | Description | Reports
---|---|---
Attributes_To_Show | List of optional columns/elements to include in the report (default: none). See Section 4.1.2, Section 4.2.2, Section 4.3.2 and Section 4.4.2 for permissible values. Note that the component and parent columns/elements cannot be selected individually and MUST NOT be included in the list (see the Include_Component_Details and Include_Parent_Details attributes below). | PR, DR, TR, IR
Exclude_Monthly_Details | Specifies whether to exclude the columns with the monthly usage from the report. Permissible values are False (default) and True. This attribute is only applicable to tabular reports; the corresponding attribute for JSON reports is Granularity. | PR, DR, TR, IR
Granularity | Specifies the granularity of the usage data to include in the report. Permissible values are Month (default) and Totals. This attribute is only applicable to JSON reports; the corresponding attribute for tabular reports is Exclude_Monthly_Details. For Totals each Item_Performance element represents the aggregated usage for the reporting period. Support for Month is REQUIRED for COUNTER compliance, support for Totals is optional. | PR, DR, TR, IR
Include_Component_Details | Specifies whether to include the component columns/elements (see Table 3.k) in the report. Permissible values are False (default) and True. | IR
Include_Parent_Details | Specifies whether to include the parent columns/elements (see Table 3.j) in the report. Permissible values are False (default) and True. | IR
Table 3.y (below): Report Filters
Report Filter | Description | Reports
---|---|---
Access_Method | List of Access_Methods for which to include usage (default: all). See Section 4.1.3, Section 4.2.3, Section 4.3.3 and Section 4.4.3 for permissible/pre-set values. | All reports
Access_Type | List of Access_Types for which to include usage (default: all). See Section 4.3.3 and Section 4.4.3 for permissible/pre-set values. | TR, IR
Begin_Date; End_Date | Beginning and end of the reporting period. Note that the COUNTER_SUSHI API allows the format yyyy-mm for the method parameters, which must be expanded with the first/last day of the month for the report header. For the tabular reports these filters are included in the Reporting_Period header instead of the Report_Filters header for easier reading. | All reports
Database | Name of a specific database for which usage is being requested (default: all). Support for this filter is optional but recommended for the reporting website. | DR
Data_Type | List of Data_Types for which to include usage (default: all). See Section 4.1.3, Section 4.2.3, Section 4.3.3 and Section 4.4.3 for permissible/pre-set values. | PR, DR, TR, IR
Item_Contributor | Identifier of a specific contributor (author) for which usage is being requested (default: all). Support for this filter is optional but recommended for the reporting website. | IR
Item_ID | Identifier of a specific item for which usage is being requested. Support for this filter is optional but recommended for the reporting website. | TR, IR
Metric_Type | List of Metric_Types for which to include usage (default: all). See Section 4.1.3, Section 4.2.3, Section 4.3.3 and Section 4.4.3 for permissible/pre-set values. For the tabular reports this filter is included in the Metric_Types header instead of the Report_Filters header for easier reading. | All reports
Platform | The Platform filter is only intended for cases where there is a single endpoint for multiple platforms; that is, the same base URL for the COUNTER_SUSHI API is used for multiple platforms and the platform parameter is required for all API calls. In the web interface this would correspond to first selecting one platform and then creating reports only for that platform. | All reports
Section_Type | List of Section_Types for which to include usage (default: all). See Section 4.3.3 for permissible values. | TR
YOP | Range of years of publication for which to include usage (default: all). For the COUNTER_SUSHI API more complex filter values (list of years and ranges) MUST be supported. | TR, IR
Zero Usage¶
Not all content providers or other COUNTER report providers link their COUNTER reporting tool to their subscription database, so R5 reports cannot include zero-usage reporting based on subscription records. Inclusion of zero-usage reporting for everything, including unsubscribed content, could make reports unmanageably large. The need for libraries to identify subscribed titles with zero usage will be addressed by the KBART Automation Working Group initiative.
For tabular reports
Omit any row where the Reporting_Period_Total would be zero.
If the Reporting_Period_Total is not zero, but usage for an included month is zero, set the cell value for that month to 0.
For JSON reports
Omit any Instance element with a Count of zero.
Omit Performance elements that don’t have at least one Instance element.
Omit Report_Items elements that don’t have at least one Performance element.
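A minimal sketch of these three rules applied to the JSON structure (Report_Items containing Performance elements containing Instance elements), assuming the report has already been parsed into Python dictionaries:

```python
def prune_zero_usage(report_items):
    """Apply the zero-usage rules to the Report_Items of a JSON report."""
    pruned = []
    for item in report_items:
        performances = []
        for perf in item.get("Performance", []):
            # Omit any Instance element with a Count of zero.
            instances = [i for i in perf.get("Instance", []) if i.get("Count", 0) > 0]
            # Omit Performance elements without at least one Instance element.
            if instances:
                performances.append({**perf, "Instance": instances})
        # Omit Report_Items elements without at least one Performance element.
        if performances:
            pruned.append({**item, "Performance": performances})
    return pruned
```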
Missing and Unknown Field Values¶
For tabular reports
If a field value is missing or unknown (e.g. the ISBN for a title doesn’t exist or isn’t known), the field MUST be left blank. For clarity, the field MUST NOT contain values such as “unknown” or “n/a”.
For JSON reports
If the value of a field is missing or unknown and the COUNTER_SUSHI API Specification (see Section 8 below) indicates the field is REQUIRED, the value of the field MUST be expressed as empty as appropriate for the data type.
If the value of a field is missing or unknown and the field is not REQUIRED according to the COUNTER_SUSHI API Specification, the field MUST be omitted from the response.
COUNTER reports¶
Platform Reports¶
Platform Reports provide a summary of activity on a given platform to support the evaluation of platforms and to provide high-level statistical data to support surveys and reporting to funders.
Table 4 (below): Platform Master Report and Standard Views
Report_ID | Report_Name | Details | Host_Types
---|---|---|---
PR | Platform Master Report | A customizable report summarizing activity across a content provider’s platforms that allows the user to apply filters and select other configuration options. | All Host_Types*
PR_P1 | Platform Usage | Platform-level usage summarized by Metric_Type. | All Host_Types*
*Data repositories may choose to conform to the Code of Practice Release 5 or, alternatively, may wish to work with the Code of Practice for Research Data.
Report Header¶
The table below shows the header details for the Platform Master Report and its Standard Views. For the tabular reports, elements MUST appear in the exact order shown, and spelling, casing, and punctuation of labels (Column A) and fixed data elements such as report names (Column B) MUST match exactly. The JSON version of the report MUST comply with the Report_Header definition in the COUNTER_SUSHI API Specification (see Section 8 below). Entries in the table appearing in italics describe the values to include.
Table 4.a (below): Header for Platform Master Report and Standard Views
Row in Tabular Report | Label for Tabular Report (Column A) | PR (Column B) | PR_P1 (Column B)
---|---|---|---
1 | Report_Name | Platform Master Report | Platform Usage
2 | Report_ID | PR | PR_P1
3 | Release | 5 | 5
4 | Institution_Name | Name of the institution the usage is attributed to. | |
5 | Institution_ID | Identifier(s) for the institution in the format of {namespace}:{value}. Leave blank if identifier is not known. Multiple identifiers may be included by separating with semicolon-space (“; ”). | |
6 | Metric_Types | Semicolon-space delimited list of Metric_Types included in the report. | Searches_Platform; Total_Item_Requests; Unique_Item_Requests; Unique_Title_Requests
7 | Report_Filters | Semicolon-space delimited list of filters applied to the data to generate the report. | Access_Method=Regular*
8 | Report_Attributes | Semicolon-space delimited list of report attributes applied to the data to generate the report. | (blank)
9 | Exceptions | Any exceptions that occurred in generating the report, in the format “{Exception Number}: {Exception Description} ({Data})” with multiple exceptions separated by semicolon-space (“; ”). | |
10 | Reporting_Period | Date range requested for the report in the form of “Begin_Date=yyyy-mm-dd; End_Date=yyyy-mm-dd”. The “dd” of the Begin_Date is 01. The “dd” of the End_Date is the last day of the month. | |
11 | Created | Date and time the report was run in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). | |
12 | Created_By | Name of organization or system that generated the report. | |
13 | (blank) | (blank) | (blank)
*If a Platform filter is used (see Section 3.3.8 for details), it MUST be included in Report_Filters.
Column Headings/Elements¶
The following elements MUST appear in the tabular report in the order they appear in the table below. For guidance on how these fields appear in the JSON format, refer to the COUNTER_SUSHI API Specification (see Section 8 below). Mandatory (M) elements MUST be included in the report. Optional (O) elements MUST only be included if requested, and if included they MUST be listed in Attributes_To_Show in the Report_Attributes header.
Table 4.b (Below): Column Headings/Elements for Platform Master Report and Standard Views
Field Name (Tabular) | PR | PR_P1
---|---|---
Platform | M | M
Data_Type | O | |
Access_Method | O | |
Metric_Type | M | M
Reporting_Period_Total | M | M
Mmm-yyyy | M* | M
*unless Exclude_Monthly_Details=True is used
Filters and Attributes¶
The following table presents the values that can be chosen for the Platform Master Report and that are pre-set for the Standard Views. If a filter is not included in the request, the default applies. For the Standard Views an empty cell indicates that the filter is not applied.
Table 4.c (below) Filters/Attributes for Platform Master Report and Standard Views
Filter/Attribute | PR (options for Master Report) | PR_P1 (required for Standard View)
---|---|---
Data_Type | One or more or all (default) of the Data_Types applicable to the platform. | |
Access_Method | One or all (default) of: Regular, TDM | Regular
Metric_Type | One or more or all (default) of: Searches_Platform, Total_Item_Investigations, Total_Item_Requests, Unique_Item_Investigations, Unique_Item_Requests, Unique_Title_Investigations, Unique_Title_Requests | Searches_Platform; Total_Item_Requests; Unique_Item_Requests; Unique_Title_Requests
Exclude_Monthly_Details | False (default) or True | |
If a filter is applied to a column that doesn’t show on the report, usage for all selected attribute values is summed and the totals are presented in the report.
Database Reports¶
Database Reports provide a summary of activity related to a given database or fixed collection of content that is packaged like a database. These reports provide a means of evaluating the impact a database has for an institution’s users.
Table 4.d (below): Database Master Report and Standard Views
Report_ID | Report_Name | Details | Host_Types
---|---|---|---
DR | Database Master Report | A customizable report detailing activity by database that allows the user to apply filters and select other configuration options. | A&I_Database
DR_D1 | Database Search and Item Usage | Reports on key Searches, Investigations and Requests metrics needed to evaluate a database. | A&I_Database
DR_D2 | Database Access Denied | Reports on Access Denied activity for databases where users were denied access because simultaneous-use licenses were exceeded or their institution did not have a license for the database. | A&I_Database
Report Header¶
The table below shows the header details for the Database Master Report and its Standard Views. For the tabular reports, elements MUST appear in the exact order shown, and spelling, casing, and punctuation of labels (Column A) and fixed data elements such as report names (Column B) MUST match exactly. The JSON version of the report MUST comply with the Report_Header definition in the COUNTER_SUSHI API Specification (see Section 8 below). Entries in the table appearing in italics describe the values to include.
Table 4.e (below): Header for Database Master Report and Standard Views
Row in Tabular Report | Label for Tabular Report (Column A) | DR (Column B) | DR_D1 (Column B) | DR_D2 (Column B)
---|---|---|---|---
1 | Report_Name | Database Master Report | Database Search and Item Usage | Database Access Denied
2 | Report_ID | DR | DR_D1 | DR_D2
3 | Release | 5 | 5 | 5
4 | Institution_Name | Name of the institution the usage is attributed to. | | |
5 | Institution_ID | Identifier(s) for the institution in the format of {namespace}:{value}. Leave blank if identifier is not known. Multiple identifiers may be included by separating with semicolon-space (“; ”). | | |
6 | Metric_Types | Semicolon-space delimited list of Metric_Types included in the report. | Searches_Automated; Searches_Federated; Searches_Regular; Total_Item_Investigations; Total_Item_Requests | Limit_Exceeded; No_License
7 | Report_Filters | Semicolon-space delimited list of filters applied to the data to generate the report. | Access_Method=Regular* | Access_Method=Regular*
8 | Report_Attributes | Semicolon-space delimited list of report attributes applied to the data to generate the report. | (blank) | (blank)
9 | Exceptions | Any exceptions that occurred in generating the report, in the format “{Exception Number}: {Exception Description} ({Data})” with multiple exceptions separated by semicolon-space (“; ”). | | |
10 | Reporting_Period | Date range requested for the report in the form of “Begin_Date=yyyy-mm-dd; End_Date=yyyy-mm-dd”. The “dd” of the Begin_Date is 01. The “dd” of the End_Date is the last day of the month. | | |
11 | Created | Date and time the report was run in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). | | |
12 | Created_By | Name of organization or system that generated the report. | | |
13 | (blank) | (blank) | (blank) | (blank)
*If a Platform filter is used (see Section 3.3.8 for details), it MUST be included in Report_Filters.
Column Headings/Elements¶
The following elements MUST appear in the tabular report in the order they appear in the table below. For guidance on how these fields appear in the JSON format, refer to the COUNTER_SUSHI API Specification (see Section 8 below). Mandatory (M) elements MUST be included in the report. Optional (O) elements MUST only be included if requested, and if included they MUST be listed in Attributes_To_Show in the Report_Attributes header.
Table 4.f (below): Column Headings/Elements for Database Master Report and Standard Views
Field Name (Tabular) | DR | DR_D1 | DR_D2
---|---|---|---
Database | M | M | M
Publisher | M | M | M
Publisher_ID | M | M | M
Platform | M | M | M
Proprietary_ID | M | M | M
Data_Type | O | | |
Access_Method | O | | |
Metric_Type | M | M | M
Reporting_Period_Total | M | M | M
Mmm-yyyy | M* | M | M
*unless Exclude_Monthly_Details=True is used
Filters and Attributes¶
The following table presents the values that can be chosen for the Database Master Report and that are pre-set for the Standard Views. If a filter is not included in the request, the default applies. For the Standard Views an empty cell indicates that the filter is not applied.
Table 4.g (below): Filters/Attributes for Database Master Report and Standard Views
Filter/Attribute | DR (options for Master Report) | DR_D1 (required for Standard View) | DR_D2 (required for Standard View)
---|---|---|---
Data_Type | One or more or all (default) of the Data_Types applicable to the platform. | | |
Access_Method | One or all (default) of: Regular, TDM | Regular | Regular
Metric_Type | One or more or all (default) of: Searches_Automated, Searches_Federated, Searches_Regular, Total_Item_Investigations, Total_Item_Requests, Unique_Item_Investigations, Unique_Item_Requests, Limit_Exceeded, No_License | Searches_Automated; Searches_Federated; Searches_Regular; Total_Item_Investigations; Total_Item_Requests | Limit_Exceeded; No_License
Exclude_Monthly_Details | False (default) or True | | |
If a filter is applied to a column that doesn’t show on the report, usage for all selected attribute values is summed and the totals are presented in the report.
Title Reports¶
Title Reports provide a summary of activity related to content at the title level and provide a means of evaluating the impact a title has for an institution’s patrons.
Table 4.h (below): Title Master Report and Standard Views
Report_ID | Report_Name | Details | Host_Types
---|---|---|---
TR | Title Master Report | A customizable report detailing activity at the title level (journal, book, etc.) that allows the user to apply filters and select other configuration options. | Aggregated_Full_Content
TR_B1 | Book Requests (Excluding OA_Gold) | Reports on full-text activity for books, excluding Gold Open Access content, as Total_Item_Requests and Unique_Title_Requests. The Unique_Title_Requests provides comparable usage across book platforms. The Total_Item_Requests shows overall activity; however, numbers between sites will vary significantly based on how the content is delivered (e.g. delivered as a complete book or by chapter). | Aggregated_Full_Content
TR_B2 | Book Access Denied | Reports on Access Denied activity for books where users were denied access because simultaneous-use licenses were exceeded or their institution did not have a license for the book. | Aggregated_Full_Content
TR_B3 | Book Usage by Access Type | Reports on book usage showing all applicable Metric_Types broken down by Access_Type. | Aggregated_Full_Content
TR_J1 | Journal Requests (Excluding OA_Gold) | Reports on usage of journal content, excluding Gold Open Access content, as Total_Item_Requests and Unique_Item_Requests. The Unique_Item_Requests provides comparable usage across journal platforms by reducing the inflationary effect that occurs when an HTML full text automatically displays and the user then accesses the PDF version. The Total_Item_Requests shows overall activity. | Aggregated_Full_Content
TR_J2 | Journal Access Denied | Reports on Access Denied activity for journal content where users were denied access because simultaneous-use licenses were exceeded or their institution did not have a license for the title. | Aggregated_Full_Content
TR_J3 | Journal Usage by Access Type | Reports on usage of journal content for all Metric_Types broken down by Access_Type. | Aggregated_Full_Content
TR_J4 | Journal Requests by YOP (Excluding OA_Gold) | Breaks down the usage of journal content, excluding Gold Open Access content, by year of publication (YOP), providing counts for the Metric_Types Total_Item_Requests and Unique_Item_Requests. Provides the details necessary to analyze usage of content in backfiles or covered by perpetual access agreements. Note that COUNTER reports do not provide access model or perpetual access rights details. | Aggregated_Full_Content
Report Header¶
The table below shows the header details for the Title Master Report and its Standard Views. For the tabular reports, elements MUST appear in the exact order shown, and spelling, casing, and punctuation of labels (Column A) and fixed data elements such as report names (Column B) MUST match exactly. The JSON version of the report MUST comply with the Report_Header definition in the COUNTER_SUSHI API Specification (see Section 8 below). Entries in the table appearing in italics describe the values to include.
Table 4.i (below) Header for Title Master Report and Standard Views - Part 1 (for Books)
Row in Tabular Report | Label for Tabular Report (Column A) | TR (Column B) | TR_B1 (Column B) | TR_B2 (Column B) | TR_B3 (Column B)
---|---|---|---|---|---
1 | Report_Name | Title Master Report | Book Requests (Excluding OA_Gold) | Book Access Denied | Book Usage by Access Type
2 | Report_ID | TR | TR_B1 | TR_B2 | TR_B3
3 | Release | 5 | 5 | 5 | 5
4 | Institution_Name | Name of the institution the usage is attributed to. | | | |
5 | Institution_ID | Identifier(s) for the institution in the format of {namespace}:{value}. Leave blank if identifier is not known. Multiple identifiers may be included by separating with semicolon-space (“; ”). | | | |
6 | Metric_Types | Semicolon-space delimited list of Metric_Types included in the report. | Total_Item_Requests; Unique_Title_Requests | Limit_Exceeded; No_License | Total_Item_Investigations; Total_Item_Requests; Unique_Item_Investigations; Unique_Item_Requests; Unique_Title_Investigations; Unique_Title_Requests
7 | Report_Filters | Semicolon-space delimited list of filters applied to the data to generate the report. | Data_Type=Book; Access_Type=Controlled; Access_Method=Regular* | Data_Type=Book; Access_Method=Regular* | Data_Type=Book; Access_Method=Regular*
8 | Report_Attributes | Semicolon-space delimited list of report attributes applied to the data to generate the report. | (blank) | (blank) | (blank)
9 | Exceptions | Any exceptions that occurred in generating the report, in the format “{Exception Number}: {Exception Description} ({Data})” with multiple exceptions separated by semicolon-space (“; ”). | | | |
10 | Reporting_Period | Date range requested for the report in the form of “Begin_Date=yyyy-mm-dd; End_Date=yyyy-mm-dd”. The “dd” of the Begin_Date is 01. The “dd” of the End_Date is the last day of the month. | | | |
11 | Created | Date and time the report was run in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). | | | |
12 | Created_By | Name of organization or system that generated the report. | | | |
13 | (blank) | (blank) | (blank) | (blank) | (blank)
*If a Platform filter is used (see Section 3.3.8 for details), it MUST be included in Report_Filters.
Table 4.j (below): Header for Title Master Report and Standard Views - Part 2 (for Journals)
Row in Tabular Report | Label for Tabular Report (Column A) | TR_J1 (Column B) | TR_J2 (Column B) | TR_J3 (Column B) | TR_J4 (Column B)
---|---|---|---|---|---
1 | Report_Name | Journal Requests (Excluding OA_Gold) | Journal Access Denied | Journal Usage by Access Type | Journal Requests by YOP (Excluding OA_Gold)
2 | Report_ID | TR_J1 | TR_J2 | TR_J3 | TR_J4
3 | Release | 5 | 5 | 5 | 5
4 | Institution_Name | Name of the institution the usage is attributed to. | | | |
5 | Institution_ID | Identifier(s) for the institution in the format of {namespace}:{value}. Leave blank if identifier is not known. Multiple identifiers may be included by separating with semicolon-space (“; ”). | | | |
6 | Metric_Types | Total_Item_Requests; Unique_Item_Requests | Limit_Exceeded; No_License | Total_Item_Investigations; Total_Item_Requests; Unique_Item_Investigations; Unique_Item_Requests | Total_Item_Requests; Unique_Item_Requests
7 | Report_Filters | Data_Type=Journal; Access_Type=Controlled; Access_Method=Regular* | Data_Type=Journal; Access_Method=Regular* | Data_Type=Journal; Access_Method=Regular* | Data_Type=Journal; Access_Type=Controlled; Access_Method=Regular*
8 | Report_Attributes | (blank) | (blank) | (blank) | (blank)
9 | Exceptions | Any exceptions that occurred in generating the report, in the format “{Exception Number}: {Exception Description} ({Data})” with multiple exceptions separated by semicolon-space (“; ”). | | | |
10 | Reporting_Period | Date range requested for the report in the form of “Begin_Date=yyyy-mm-dd; End_Date=yyyy-mm-dd”. The “dd” of the Begin_Date is 01. The “dd” of the End_Date is the last day of the month. | | | |
11 | Created | Date and time the report was run in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). | | | |
12 | Created_By | Name of organization or system that generated the report. | | | |
13 | (blank) | (blank) | (blank) | (blank) | (blank)
*If a Platform filter is used (see Section 3.3.8 for details), it MUST be included in Report_Filters.
Column Headings/Elements¶
The following elements MUST appear in the tabular report in the order they appear in the table below. For guidance on how these fields appear in the JSON format, refer to the COUNTER_SUSHI API Specification (see Section 8 below). Mandatory (M) elements MUST be included in the report. Optional (O) elements MUST only be included if requested, and if included they MUST be listed in Attributes_To_Show in the Report_Attributes header.
Table 4.k (below): Column Headings/Elements for Title Master Report and Standard Views
Field Name (Tabular) | TR | TR_B1 | TR_B2 | TR_B3 | TR_J1 | TR_J2 | TR_J3 | TR_J4
---|---|---|---|---|---|---|---|---
Title | M | M | M | M | M | M | M | M
Publisher | M | M | M | M | M | M | M | M
Publisher_ID | M | M | M | M | M | M | M | M
Platform | M | M | M | M | M | M | M | M
DOI | M | M | M | M | M | M | M | M
Proprietary_ID | M | M | M | M | M | M | M | M
ISBN | M | M | M | M | | | | |
Print_ISSN | M | M | M | M | M | M | M | M
Online_ISSN | M | M | M | M | M | M | M | M
URI | M | M | M | M | M | M | M | M
Data_Type | O | | | | | | | |
Section_Type | O | | | | | | | |
YOP | O | M | M | M | | | | M
Access_Type | O | | | M | | | M | |
Access_Method | O | | | | | | | |
Metric_Type | M | M | M | M | M | M | M | M
Reporting_Period_Total | M | M | M | M | M | M | M | M
Mmm-yyyy | M* | M | M | M | M | M | M | M
*unless Exclude_Monthly_Details=True is used
Filters and Attributes¶
The following table presents the values that can be chosen for the Title Master Report and that are pre-set for the Standard Views. If a filter is not included in the request, the default applies. For the Standard Views an empty cell indicates that the filter is not applied.
Table 4.l (below): Filters/Attributes for Title Master Report and Standard Views - Part 1 (for Books)
Filter/Attribute | TR (options for Master Report) | TR_B1 (required for Standard View) | TR_B2 (required for Standard View) | TR_B3 (required for Standard View)
---|---|---|---|---
Data_Type | One or more or all (default) of the Data_Types applicable to the platform. | Book | Book | Book
Section_Type | One or more or all (default) of the Section_Types applicable to the platform. | | | |
YOP | All years (default), a specific year in the format yyyy, or a range of years in the format yyyy-yyyy. Use 0001 for unknown or 9999 for articles in press. Note that the COUNTER_SUSHI API allows the specification of multiple years and ranges separated by the vertical pipe (“\|”) character. | | | |
Access_Type | One or more or all (default) of: Controlled, OA_Gold | Controlled | | |
Access_Method | One or all (default) of: Regular, TDM | Regular | Regular | Regular
Metric_Type | One or more or all (default) of: Total_Item_Investigations, Total_Item_Requests, Unique_Item_Investigations, Unique_Item_Requests, Unique_Title_Investigations, Unique_Title_Requests, Limit_Exceeded, No_License | Total_Item_Requests; Unique_Title_Requests | Limit_Exceeded; No_License | Total_Item_Investigations; Total_Item_Requests; Unique_Item_Investigations; Unique_Item_Requests; Unique_Title_Investigations; Unique_Title_Requests
Exclude_Monthly_Details | False (default) or True | | |
Table 4.m (below): Filters/Attributes for Title Master Report and Standard Views - Part 2 (for Journals)
Filter/Attribute | TR_J1 | TR_J2 | TR_J3 | TR_J4
---|---|---|---|---
Data_Type | Journal | Journal | Journal | Journal
Section_Type | | | | |
YOP | | | | |
Access_Type | Controlled | | | Controlled
Access_Method | Regular | Regular | Regular | Regular
Metric_Type | Total_Item_Requests; Unique_Item_Requests | Limit_Exceeded; No_License | Total_Item_Investigations; Total_Item_Requests; Unique_Item_Investigations; Unique_Item_Requests | Total_Item_Requests; Unique_Item_Requests
Exclude_Monthly_Details | False (default) or True | | |
If a filter is applied to a column that doesn’t show on the report, usage for all selected attribute values is summed and the totals are presented in the report.
Item Reports¶
Item Reports provide a summary of activity related to content at the item level and provide a means of evaluating the impact an item has for an institution’s patrons.
Table 4.n (below): Item Master Report and Standard Views
Report_ID | Report_Name | Details | Host_Types
---|---|---|---
IR | Item Master Report | A granular, customizable report showing activity at the level of the item (article, chapter, media object, etc.) that allows the user to apply filters and select other configuration options. | Data_Repository*
IR_A1 | Journal Article Requests | Reports on journal article requests at the article level. This report is limited to content with a Data_Type of Article, Parent_Data_Type of Journal, and Metric_Types of Total_Item_Requests and Unique_Item_Requests. This Standard View MUST only be provided if (a) it is clear for all articles in the IR whether they are journal articles or not and (b) the parent item is known for all journal articles. | Repository
IR_M1 | Multimedia Item Requests | Reports on multimedia requests at the item level. | Multimedia
*Data repositories may choose to conform to the Code of Practice Release 5 or, alternatively, may wish to work with the Code of Practice for Research Data.
Report Header¶
The table below shows the header details for the Item Master Report and its Standard Views. For the tabular reports, elements MUST appear in the exact order shown, and spelling, casing and punctuation of labels (Column A) and fixed data elements such as report names (Column B) MUST match exactly. The JSON version of the report MUST comply with the Report_Header definition in the COUNTER_SUSHI API Specification (see Section 8 below). Entries in the table appearing in italics describe the values to include.
Table 4.o (below): Header for Item Master Report and Standard Views
Row in Tabular Report | Label for Tabular Report (Column A) | IR (Column B) | IR_A1 (Column B) | IR_M1 (Column B)
---|---|---|---|---
1 | Report_Name | Item Master Report | Journal Article Requests | Multimedia Item Requests
2 | Report_ID | IR | IR_A1 | IR_M1
3 | Release | 5 | 5 | 5
4 | Institution_Name | Name of the institution the usage is attributed to. | | |
5 | Institution_ID | Identifier(s) for the institution in the format of {namespace}:{value}. Leave blank if identifier is not known. Multiple identifiers may be included by separating with semicolon-space (“; ”). | | |
6 | Metric_Types | Semicolon-space delimited list of Metric_Types included in the report. | Total_Item_Requests; Unique_Item_Requests | Total_Item_Requests
7 | Report_Filters | Semicolon-space delimited list of filters applied to the data to generate the report. | Data_Type=Article; Parent_Data_Type=Journal; Access_Method=Regular* | Data_Type=Multimedia; Access_Method=Regular*
8 | Report_Attributes | Semicolon-space delimited list of report attributes applied to the data to generate the report. | (blank) | (blank)
9 | Exceptions | Any exceptions that occurred in generating the report, in the format “{Exception Number}: {Exception Description} ({Data})” with multiple exceptions separated by semicolon-space (“; ”). | | |
10 | Reporting_Period | Date range requested for the report in the form of “Begin_Date=yyyy-mm-dd; End_Date=yyyy-mm-dd”. The “dd” of the Begin_Date is 01. The “dd” of the End_Date is the last day of the month. | | |
11 | Created | Date and time the report was run in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). | | |
12 | Created_By | Name of organization or system that generated the report. | | |
13 | (blank) | (blank) | (blank) | (blank)
*If a Platform filter is used (see Section 3.3.8 for details), it MUST be included in Report_Filters.
Column Headings/Elements¶
The following elements MUST appear in the tabular report in the order they appear in the table below. For guidance on how these fields appear in the JSON format, refer to the COUNTER_SUSHI API Specification (see Section 8 below). Mandatory (M) elements MUST be included in the report. The optional (O) Parent and Component elements MUST only be included if requested via Include_Parent_Details and Include_Component_Details, respectively (they are not supposed to be selected individually). If they are included then the corresponding Include_Parent_Details=True or Include_Component_Details=True MUST be included in the Report_Attributes header. The other optional (O) elements MUST only be included if requested, and if included they MUST be listed in Attributes_To_Show in the Report_Attributes header.
Table 4.p (below): Column Headings/Elements for Item Master Report and Standard Views
Field Name (Tabular) | IR | IR_A1 | IR_M1
---|---|---|---
Item | M | M | M
Publisher | M | M | M
Publisher_ID | M | M | M
Platform | M | M | M
Authors | O | M | |
Publication_Date | O | M | |
Article_Version | O | M | |
DOI | M | M | M
Proprietary_ID | M | M | M
ISBN | M | | |
Print_ISSN | M | M | |
Online_ISSN | M | M | |
URI | M | M | M
Parent_Title | O | M | |
Parent_Authors | O | M | |
Parent_Publication_Date | O | | |
Parent_Article_Version | O | M | |
Parent_Data_Type | O | | |
Parent_DOI | O | M | |
Parent_Proprietary_ID | O | M | |
Parent_ISBN | O | | |
Parent_Print_ISSN | O | M | |
Parent_Online_ISSN | O | M | |
Parent_URI | O | M | |
Component_Title | O | | |
Component_Authors | O | | |
Component_Publication_Date | O | | |
Component_Data_Type | O | | |
Component_DOI | O | | |
Component_Proprietary_ID | O | | |
Component_ISBN | O | | |
Component_Print_ISSN | O | | |
Component_Online_ISSN | O | | |
Component_URI | O | | |
Data_Type | O | | |
YOP | O | | |
Access_Type | O | M | |
Access_Method | O | | |
Metric_Type | M | M | M
Reporting_Period_Total | M | M | M
Mmm-yyyy | M* | M | M
*unless Exclude_Monthly_Details=True is used
Filters and Attributes¶
The following table presents the values that can be chosen for the Item Master Report and that are pre-set for the Standard Views. If a filter is not included in the request, the default applies. For the Standard Views an empty cell indicates that the filter is not applied.
Table 4.q (below): Filters/Attributes for Item Master Report and Standard Views
Filter/Attribute | IR (options for Master Report) | IR_A1 (required for Standard View) | IR_M1 (required for Standard View)
---|---|---|---
Data_Type | One or more or all (default) of the Data_Types applicable to the platform. | Article | Multimedia
YOP | All years (default), a specific year in the format yyyy, or a range of years in the format yyyy-yyyy. Use 0001 for unknown or 9999 for articles in press. Note that the COUNTER_SUSHI API allows the specification of multiple years and ranges separated by the vertical pipe (“\|”) character. | | |
Access_Type | One or more or all (default) of: Controlled, OA_Gold, Other_Free_To_Read | | |
Access_Method | One or all (default) of: Regular, TDM | Regular | Regular
Metric_Type | One or more or all (default) of: Total_Item_Investigations, Total_Item_Requests, Unique_Item_Investigations, Unique_Item_Requests, Limit_Exceeded, No_License | Total_Item_Requests; Unique_Item_Requests | Total_Item_Requests
Include_Parent_Details | False (default) or True | | |
Include_Component_Details | False (default) or True | | |
Exclude_Monthly_Details | False (default) or True | | |
If a filter is applied to a column that doesn’t show on the report, usage for all selected attribute values is summed and the totals are presented in the report.
Delivery of COUNTER Reports¶
Content providers MUST make tabular versions of COUNTER reports available from an administrative/reporting site accessible by members of the institution requesting the report. All COUNTER reports provided by the content provider MUST also be available via the COUNTER_SUSHI API. Delivery requirements are:
Reports MUST be provided in the following formats:
Microsoft Excel file (see Section 3.2 above), Tab Separated Value (TSV) file, or other structured text file that can be easily imported into spreadsheet programs without loss or corruption of data. Microsoft Excel files may be offered in addition to text files.
JSON formatted in accordance with the COUNTER_SUSHI API Specification (see Section 8 below).
Each report MUST be delivered as a separate file to facilitate automated processing of usage reports into ERM and usage consolidation systems. For clarity, multiple reports MUST NOT be included in the same Excel file as separate worksheets.
Tabular reports MUST be made available through a website.
The website may be password-controlled.
Email alerts may be sent when data is updated.
The report interface MUST provide filter and configuration options for the Master Reports that apply to the content provider.
The report interface MUST offer all Standard Views the content provider is required to provide. Selecting a Standard View MUST automatically apply the REQUIRED filter and configuration options, and the user MUST NOT be able to alter the filters or configuration options except for the usage begin and end dates.
The date range fields on the user interface MUST default to the latest month with complete usage (see the sketch following this list). For example, if the current date is 15 May 2019 and April usage has been processed, the begin date would default to 01 April 2019 and the end date would default to 30 April 2019. If the April usage has not yet been processed, the start and end dates would default to 01 March 2019 and 31 March 2019.
Master Reports MUST include the option to Exclude_Monthly_Details. Item Master Reports MUST include the options to Include_Parent_Details and Include_Component_Details (see Section 3.3.8 for details).
Reports MUST be provided monthly.
Data MUST be updated within 4 weeks of the end of the reporting period.
Usage MUST be processed for the entire month before any usage for that month can be included in reports. If usage for a given month is not yet available, usage for that month MUST NOT be returned, and exception 3031 MUST be included in the report/response to indicate that usage is not ready for the requested dates.
A minimum of the current year plus the prior 24 months of usage data MUST be available, unless the content provider is newly COUNTER compliant.
When content providers become compliant with a new release of the Code of Practice, they begin compiling usage compliant with the new release from the time they become compliant; and they MUST continue to provide the older usage that complies with the previous release(s) of the Code of Practice to fulfil the requirement.
The reports MUST allow the customer the flexibility to specify a date range, in terms of months, within the most recent 24-month period.
Reports MUST be available for harvesting via the COUNTER_SUSHI API within 4 weeks of the end of the reporting period.
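A minimal sketch of how the default date range for the report interface might be derived, using the example dates above and Python's standard library (the function name is illustrative):

```python
import calendar
import datetime

def default_reporting_period(year, month):
    """Default begin/end dates: the latest month with complete usage."""
    last_day = calendar.monthrange(year, month)[1]  # number of days in the month
    return datetime.date(year, month, 1), datetime.date(year, month, last_day)

# It is 15 May 2019 and April 2019 usage has been processed:
begin, end = default_reporting_period(2019, 4)
# begin == date(2019, 4, 1) and end == date(2019, 4, 30)
```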
Access to Usage for Consortia¶
Separate consortium reports are not provided under R5. Consortium managers must be able to access any R5 report for their members. To facilitate this:
The consortium administrator MUST be able to access the usage statistics for individual consortium member institutions from a single login, using the same user ID and password (i.e. without having to log out and back in for each individual institution).
COUNTER_SUSHI API implementations MUST support the /members path (see Section 10.3 below) to facilitate consortium managers retrieving usage for all members.
Logging Usage¶
Usage data can be generated in a number of ways, and COUNTER does not prescribe which approach should be taken. The two most common approaches are:
Log file analysis, which reads the log files containing the web server records of all its transactions
Page tagging, which uses JavaScript on each page to notify a third-party server when a page is rendered by a web browser.
A further option is to leverage Distributed Usage Logging (DUL) to capture content activity that happens on other websites. Each of these approaches has advantages and disadvantages, summarised below.
Log File Analysis¶
The main advantages of log file analysis over page tagging are:
Web servers normally produce log files, so the raw data are already available. No changes to the website are required.
The data is on the organization’s own servers and is in a standard, rather than a proprietary, format. This makes it easy for an organization to switch programs later, use several different programs, and analyse historical data with a new program.
Log files contain information on visits from search engine spiders. Although these MUST NOT be reported as part of user activity, it is useful information for search engine optimization.
Log files require no additional DNS lookups. Thus, there are no external server calls which can slow page load speeds or result in uncounted page views.
The web server reliably records every transaction it makes, including items such as serving PDF documents and content generated by scripts, and does not rely on the visitor’s browser.
Page Tagging¶
The main advantages of page tagging over log file analysis are:
Counting is activated by opening the page, not requesting it from the server. If a page is cached it will not be counted by the server. Cached pages can account for a significant proportion of page views.
Data is gathered via a component (tag) in the page, usually written in JavaScript, although Java can also be used. jQuery and AJAX can also be used in conjunction with a server-side scripting language (such as PHP) to manipulate the data and store it in a database, allowing complete control over how the data is represented.
The script may have access to additional information on the web client or on the user, not sent in the query.
Page tagging can report on events that do not involve a request to the web server.
Page tagging is available to companies who do not have access to their own web servers.
The page-tagging service manages the process of assigning cookies to visitors; with log file analysis, the server must be configured to do this.
Recently page tagging has become a standard in web analytics.
Log file analysis is almost always performed in-house. Page tagging can be done in-house, but is more often provided as a third-party service. The cost differences between these two models can also be a consideration.
Distributed Usage Logging¶
Distributed Usage Logging (DUL) is an initiative sponsored by Crossref (see DUL Working Group for more information) that provides a framework for publishers to capture usage of DOI-identified content items that occurs on other websites, such as aggregators, repositories, and scholarly information-sharing sites. The premise behind DUL is that publishers can register a DUL usage logging end-point with Crossref, which is then mapped to all of the publisher’s DOIs. A content site, such as a repository, can use a content item’s DOI to look up where the publisher wants a transaction to be logged, then use the standard DUL message structure to log the activity. Using DUL allows a publisher to capture a more complete picture of content usage. The following points cover how DUL may be used with COUNTER statistical reporting:
DUL is not a replacement for log file analysis or page-tagging approaches. DUL can supplement a publisher’s normal usage logging mechanisms, but not replace them.
DUL-captured usage MUST NOT appear on Standard Views.
DUL-captured usage may appear on Master Reports.
DUL-captured usage that appears on Master Reports MUST be reported under the platform name where the transaction occurred.
An organization that supplies usage transactions using DUL MUST include their platform ID with each transaction, and their platform MUST be registered with COUNTER.
Reporting usage through DUL is OPTIONAL.
The publisher receiving transactions through DUL is responsible for performing COUNTER processing to eliminate double-clicks, eliminate robot/crawler or other rogue usage, and perform the actions to identify unique items and unique titles.
Publishers that plan to include usage reported through DUL in their COUNTER Master Reports are responsible for ensuring that DUL-reported usage is included in the audit.
Processing Rules for Underlying COUNTER Reporting Data¶
Usage data collected by content providers for the usage reports to be sent to customers should meet the basic requirement that only intended usage is recorded and that all requests that are not intended by the user are removed.
Because the way usage records are generated can differ across platforms, it is impractical to describe all the possible filters and techniques used to clean up the data. This Code of Practice, therefore, specifies only the requirements to be met by the data to be used for building the usage reports.
Return Codes¶
Only successful and valid requests MUST be counted. For web server log files, successful requests are those with specific W3C status codes (200 and 304). The standards for return codes are defined and maintained by the W3C (http://www.w3.org/Protocols/HTTP/HTRESP.html). If key events are used, their definition MUST match the W3C standards. (For more information see The Friendly Guide to Release 5: Technical Notes for Content Providers.)
Double-Click Filtering¶
The intent of double-click filtering is to remove the potential for over-counting that could occur when a user clicks the same link multiple times, typically due to a slow internet connection. Double-click filtering applies to Total_Item_Investigations, Total_Item_Requests, No_License and Limit_Exceeded. See Section 7.3 and Section 7.4 below for information about counting unique items and titles. The double-click filtering rule is as follows:
Double-clicks, i.e. two clicks in succession, on a link by the same user within a 30-second period MUST be counted as one action. For the purposes of COUNTER, the time window for a double-click on any page is set at a maximum of 30 seconds between the first and second mouse clicks. For example, a click at 10:01:00 and a second click at 10:01:29 would be considered a double-click (one action); a click at 10:01:00 and a second click at 10:01:35 would count as two separate single clicks (two actions).
A double-click may be triggered by a mouse-click or by pressing a refresh or back button. When two actions are made for the same URL within 30 seconds the first request MUST be removed and the second retained.
Any additional requests for the same URL within 30 seconds (between clicks) MUST be treated identically: always remove the first and retain the second.
There are different ways to track whether two requests for the same URL are from the same user and session. These options are listed below in order of increasing reliability, with option 4 being the most reliable; the sketch following this list combines them with the 30-second rule.
If the user is authenticated only through an IP address, that IP address combined with the browser’s user-agent (logged in the HTTP header) MUST be used to trace double-clicks. Where you have multiple users on a single IP address with the same browser user-agent, this can occasionally lead to separate clicks from different users being logged as a double click from one user. This will only happen if the multiple users are clicking on exactly the same content within a few seconds of each other.
When a session cookie is implemented and logged, the session cookie MUST be used to trace double-clicks.
When a user cookie is available and logged, the user cookie MUST be used to trace double-clicks.
When an individual has logged in with their own profile, their username MUST be used to trace double-clicks.
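A minimal sketch of the double-click rule, assuming each transaction carries a datetime timestamp, a URL, and a user key derived per one of the options above (all field names are illustrative):

```python
def filter_double_clicks(transactions, window_seconds=30):
    """Collapse double-clicks: when the same user requests the same URL within
    30 seconds, remove the first request and retain the second (repeatedly)."""
    kept = []
    last_index = {}  # (user_key, url) -> index in kept of the last retained click
    for txn in sorted(transactions, key=lambda t: t["time"]):
        key = (txn["user_key"], txn["url"])
        if key in last_index:
            prev = kept[last_index[key]]
            if (txn["time"] - prev["time"]).total_seconds() <= window_seconds:
                kept[last_index[key]] = txn  # drop the earlier click, keep this one
                continue
        kept.append(txn)
        last_index[key] = len(kept) - 1
    return kept
```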
Counting Unique Items¶
Some COUNTER Metric_Types count the number of unique items that had a certain activity, such as Unique_Item_Requests or Unique_Item_Investigations.
For the purpose of COUNTER metrics, an item is the typical unit of content being accessed by users, such as articles, book chapters, book sections, whole books (if delivered as a single file), and multimedia content. The item MUST be identified using the unique ID which identifies the work (e.g. chapter or article) regardless of format (e.g. PDF, HTML, or EPUB). If no item-level identifier is available, then use the item name in combination with the identifier of the parent item (i.e. the article title + ISSN of the journal, or chapter name + ISBN of the book).
The rules for calculating the unique item counts are as follows:
If multiple transactions qualifying for the Metric_Type in question represent the same item and occur in the same user-session, only one unique activity MUST be counted for that item.
A user session is defined in any of the following ways: by a logged session ID + transaction date, by a logged user ID (if users log in with personal accounts) + transaction date + hour of day (day is divided into 24 one-hour slices), by a logged user cookie + transaction date + hour of day, or by a combination of IP address + user agent + transaction date + hour of day.
To allow for simplicity in calculating session IDs, when a session ID is not explicitly tracked, the day will be divided into 24 one-hour slices and a surrogate session ID will be generated by combining the transaction date + hour time slice + one of the following: user ID, cookie ID, or IP address + user agent. For example, consider the following transaction:
Transaction date/time: 2017-06-15 13:35
IP address: 192.1.1.168
User agent: Mozilla/5.0
Generated session ID: 192.1.1.168|Mozilla/5.0|2017-06-15|13
The above replacement for a session ID does not provide an exact analogy to a session. However, statistical studies show that using such a surrogate for a session ID produces unique counts within 1-2% of the unique counts generated with actual sessions.
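A minimal sketch of the surrogate session ID and the unique-item rule, using the example transaction above (field names are illustrative):

```python
def surrogate_session_id(txn):
    # user ID, cookie ID, or IP address + user agent, combined with the
    # transaction date and the one-hour time slice
    user_part = txn.get("user_id") or txn.get("cookie_id") or f'{txn["ip"]}|{txn["user_agent"]}'
    return f'{user_part}|{txn["date"]}|{txn["hour"]}'

def unique_item_count(transactions):
    # Count each item at most once per user-session.
    return len({(surrogate_session_id(t), t["item_id"]) for t in transactions})

txn = {"ip": "192.1.1.168", "user_agent": "Mozilla/5.0",
       "date": "2017-06-15", "hour": 13, "item_id": "article-1"}
print(surrogate_session_id(txn))  # 192.1.1.168|Mozilla/5.0|2017-06-15|13
```

The same session key supports the unique-title counts in the next section; only the identifier being deduplicated changes.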
Counting Unique Titles¶
Some COUNTER Metric_Types count the number of unique titles that had a certain activity, such as Unique_Title_Requests or Unique_Title_Investigations.
For the purpose of COUNTER metrics, a title represents the parent work that the item is part of. When the item is a chapter or section, the title is the book. The title MUST be identified using a unique identifier (e.g. an ISBN for a book) regardless of format (e.g. PDF or HTML).
The rules for calculating the unique title counts are as follows:
If multiple transactions qualifying for the Metric_Type in question represent the same title and occur in the same user-session, only one unique activity MUST be counted for that title.
A user session is defined in any of the following ways: by a logged session ID + transaction date, by a logged user ID (if users log in with personal accounts) + transaction date + hour of day (day is divided into 24 one-hour slices), by a logged user cookie + transaction date + hour of day, or by a combination of IP address + user agent + transaction date + hour of day.
To allow for simplicity in calculating session IDs, when a session ID is not explicitly tracked, the day will be divided into 24 one-hour slices and a surrogate session ID will be generated by combining the transaction date + hour time slice + one of the following: user ID, cookie ID, or IP address + user agent. For example, consider the following transaction:
Transaction date/time: 2017-06-15 13:35
IP address: 192.1.1.168
User agent: Mozilla/5.0
Generated session ID: 192.1.1.168|Mozilla/5.0|2017-06-15|13
The above replacement for a session ID does not provide an exact analogy to a session. However, statistical studies show that using such a surrogate for a session ID produces unique counts within 1-2% of the unique counts generated with actual sessions.
Attributing Usage when Item Appears in More Than One Database¶
Content providers that offer databases where a given content item (e.g. an article) is included in multiple databases MUST attribute the Investigations and Requests metrics to just one database. The following recommendations may be helpful when ambiguity arises:
Give priority to databases that the institution has rights to access.
If there is a priority order for databases for search or display within the platform, credit usage to the highest priority database.
Beyond that, use a consistent method of prioritizing databases, such as by database ID or name.
If none of the above, pick randomly.
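A hypothetical sketch of such a prioritization (all names and structures are invented for illustration):

```python
def database_to_credit(candidates, licensed, display_priority):
    """Pick the single database to credit for an item's usage.
    `candidates` is a list of database IDs, `licensed` a set of database IDs the
    institution may access, `display_priority` a dict of database ID -> rank."""
    # 1. Prefer databases the institution has rights to access.
    pool = [db for db in candidates if db in licensed] or list(candidates)
    # 2. Prefer the highest-priority database for search/display on the platform.
    ranked = [db for db in pool if db in display_priority]
    if ranked:
        return min(ranked, key=lambda db: display_priority[db])
    # 3. Otherwise fall back to a consistent ordering, e.g. lowest database ID.
    return min(pool)
```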
Federated Searches¶
Search activity generated by federated search engines MUST be categorized separately from searches conducted by users on the host platform.
Any searches generated from a federated search system MUST be included in the separate Searches_Federated counts within Database Reports and MUST NOT be included in the Searches_Regular or Searches_Automated counts.
The most common ways to recognize federated search activity are as follows:
A federated search engine may be using its own dedicated IP address, which can be identified and used to separate out the activity.
If the standard HTML interface is being used (e.g. for screen scraping), the user agent within the web log files can be used to identify the activity as coming from a federated search.
For Z39.50 activity, authentication is usually through a username/password combination. Create a unique username/password that just the federated search engine will use.
If an API or XML gateway is available, set up an instance of the gateway that is for the exclusive use of federated search tools. It is RECOMMENDED that you also require the federated search to include an identifying parameter when making requests to the gateway.
COUNTER provides lists of user agents that represent the most common federated search tools. See Appendix G.
Discovery Services and Other Multiple-Database Searches¶
Search activity generated by discovery services and other systems where multiple databases not explicitly selected by the end user are searched simultaneously MUST be counted as Searches_Automated on Database Reports. Such searches MUST be included on the Platform Reports as Searches_Platform, but only as a single search regardless of the number of databases searched.
Example: A user searches a content site where the librarian has pre-selected 20 databases for business and economics searches. For each search conducted by the user:
In the Database Report, each of the 20 databases gets credit for 1 Searches_Automated.
In the Platform Report, Searches_Platform is credited with 1.
Internet Robots and Crawlers¶
Activity generated by internet robots and crawlers MUST be excluded from all COUNTER usage reports. COUNTER provides a list of user agent values that represent the crawlers and robots that MUST be excluded. Any transaction with a user agent matching one on the list MUST NOT be included in COUNTER reports.
COUNTER maintains the current list of internet robots and crawlers at https://github.com/atmire/COUNTER-Robots
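A minimal sketch of excluding robot traffic, assuming the list has been obtained from the repository above as a set of case-insensitive regular expressions (only a few illustrative patterns are shown; the real list is much longer):

```python
import re

# Illustrative patterns only; load the full list from the COUNTER-Robots repository.
ROBOT_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"bot", r"spider", r"crawl")]

def is_robot(user_agent):
    return any(p.search(user_agent or "") for p in ROBOT_PATTERNS)

def exclude_robot_traffic(transactions):
    # Any transaction whose user agent matches the list MUST NOT be counted.
    return [t for t in transactions if not is_robot(t.get("user_agent"))]
```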
Tools and Features that Enable Bulk Downloading¶
Only genuine, user-driven usage MUST be reported. COUNTER reports MUST NOT include usage that represents requests of full-text content when it is initiated by automatic or semi-automatic bulk download tools where the downloads occur without direct user action.
Products like Quosa or Pubget MUST be recorded only when the user has clicked on the downloaded full-text article in order to open it.
Full text retrieved by automated processes such as reference manager software or robots (see Section 7.8 above) MUST be excluded.
Usage that occurs through emailing of a list of articles (Requests) or citations (Investigations) that was not the result of a user explicitly selecting the items for sharing MUST be excluded. Note that the act of a user explicitly sharing an item would be considered an Investigation, and a user downloading and then emailing a PDF would also be considered a Request.
Text and Data Mining¶
Text and data mining (TDM) is a computational process whereby text or datasets are crawled by software that recognizes entities, relationships, and actions. (STM Statement on Text and Data Mining)
TDM does NOT include straightforward information retrieval, straightforward information extraction, abstracting and summarising activity, automated translation, or summarising query-response systems.
A key feature of TDM is the discovery of unknown associations based on categories that will be revealed as a result of computational and linguistic analytical tools.
Principles for reporting usage:
COUNTER does not record TDM itself, as most of this activity takes place after an article has been downloaded; all that can be tracked is the count of articles downloaded for the purposes of mining.
Usage associated with TDM activity (e.g. articles downloaded for the purpose of TDM) MUST be tracked by assigning an Access_Method of TDM.
Usage associated with TDM activity MUST be reported using the Title, Database, and Platform Master Reports by identifying such usage as Access_Method=TDM.
Usage associated with TDM activity MUST NOT be reported in Standard Views (TR_J1, TR_B1, etc.).
Detecting activity related to TDM:
TDM activity typically requires a prior agreement between the content provider and the individual or organization downloading the content for the purpose of text mining. The content provider can isolate TDM-related traffic using techniques like:
Providing a dedicated end-point that is specifically for TDM data harvesting.
Requiring the use of a special account or profile for TDM data harvesting.
Assigning an API key that would be used for the harvesting.
Registering the IP address of the machine harvesting content.
Harvesting of content for TDM without permission or without using the endpoint or protocol supplied by the content provider MUST be treated as robot or crawler traffic and MUST be excluded from all COUNTER reports.
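As an illustration of the API-key technique above, a provider might tag each full-text request with its Access_Method at logging time. A minimal sketch, with a hypothetical key store:

```python
from typing import Optional

# Hypothetical store of API keys issued under TDM agreements; in
# practice this would come from the provider's account system.
TDM_API_KEYS = {"key-issued-under-tdm-agreement"}

def access_method(api_key: Optional[str]) -> str:
    """Tag a full-text request with its COUNTER Access_Method."""
    if api_key in TDM_API_KEYS:
        # Reported only in Master Reports (Access_Method=TDM),
        # never in Standard Views.
        return "TDM"
    return "Regular"
```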
SUSHI for Automated Report Harvesting¶
Content providers MUST support automatic harvesting of COUNTER reports via the COUNTER_SUSHI API. The specification for the RESTful COUNTER_SUSHI API is maintained by COUNTER on SwaggerHub:
https://app.swaggerhub.com/apis/COUNTER
The Swagger files are a comprehensive reference that contains a detailed description of the entire COUNTER_SUSHI API. It is expected that reporting services will use only the parts relevant to them, or make local tailored copies relevant to their particular circumstances, for example by removing methods detailing reports they don’t support.
COUNTER_SUSHI API Paths to Support¶
The following paths (methods) MUST be supported:
| Path | Description |
|---|---|
| GET /status | Returns the current status of the COUNTER_SUSHI API service. This path returns a message that includes the operating status of the API, the URL to the service’s entry in the Register of COUNTER Compliant Content Providers, and an array of service alerts (if any). |
| GET /reports | Returns a list of reports supported by the COUNTER_SUSHI API service. The response includes an array of reports, including the report identifier, the release number, the report name, a description, and (optional but recommended for custom reports) the path to use when requesting the report. |
| GET /reports/{Report_ID in lower case} | Each supported report has its own path, e.g. GET /reports/tr_b1 for “Book Requests (Excluding OA_Gold)”, GET /reports/tr_j1 for “Journal Requests (Excluding OA_Gold)”. |
| GET /members | Returns the list of consortium members or sites for multi-site customers. The response includes an array of customer account information, including for each the customer ID (to use when requesting COUNTER reports), the requestor ID (to use when requesting COUNTER reports), the customer account name, and additional identifiers for the organization (if any). Note that if the customer ID specified in the parameter for the /members path is not a multi-site organization, the response will simply return the details for that customer. |
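A brief sketch of a client exercising the first two paths, using Python’s requests library. The endpoint URL and credentials are hypothetical; real values come from a provider’s registry entry.

```python
import requests

# Hypothetical endpoint and credentials, for illustration only.
BASE = "https://example.com/sushi/r5"
AUTH = {"customer_id": "C123", "requestor_id": "R456"}

status = requests.get(f"{BASE}/status", params=AUTH, timeout=30)
status.raise_for_status()
print(status.json())  # operating status, registry URL, service alerts

reports = requests.get(f"{BASE}/reports", params=AUTH, timeout=30)
reports.raise_for_status()
for report in reports.json():  # one entry per supported report
    print(report["Report_ID"], "-", report["Report_Name"])
```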
Authentication and Security for COUNTER_SUSHI API¶
The COUNTER_SUSHI API MUST be implemented using TLS (HTTPS).
The API MUST be secured using one or more of the following methods:
Combination of customer ID and requestor ID
IP address of the SUSHI client
API key assigned to the organization harvesting the usage
Non-standard techniques for authentication (techniques not specified in the COUNTER_SUSHI API specification) MUST NOT be used.
If IP address authentication is implemented, it MUST allow the same SUSHI client (a single IP address) to harvest usage for multiple customer accounts (e.g. hosted ERM services).
Report Filters and Report Attributes¶
The COUNTER_SUSHI API specification allows report responses to be customized to the caller’s needs using report filters and report attributes. For Standard Views, these filters and attributes are implicit. For the Master Reports, the filters and attributes will be explicitly included as parameters on the COUNTER_SUSHI request.
Refer to Section 3.3.8 and the COUNTER_SUSHI API Specification for the list of filters and attributes supported by the various COUNTER reports.
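For example, a Master Report request passes its filters and attributes explicitly as query parameters. A sketch with hypothetical endpoint and credential values; multi-valued parameters are pipe-separated per the COUNTER_SUSHI API specification:

```python
import requests

# Hypothetical endpoint and credentials; filter and attribute names
# follow the COUNTER_SUSHI API specification, with "|" separating
# multiple values.
params = {
    "customer_id": "C123",
    "requestor_id": "R456",
    "begin_date": "2019-01",
    "end_date": "2019-06",
    "metric_type": "Total_Item_Requests|Unique_Item_Requests",
    "access_method": "Regular",
    "attributes_to_show": "Access_Type|YOP",
}
resp = requests.get("https://example.com/sushi/r5/reports/tr",
                    params=params, timeout=60)
resp.raise_for_status()
report = resp.json()  # JSON report: Report_Header plus Report_Items
```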
Errors and Exceptions¶
Implementations of the COUNTER_SUSHI API MUST comply with the warnings, exceptions and errors described in Appendix F.
Audit¶
An important feature of the COUNTER Code of Practice is that compliant content providers (including third-party services providing statistics on behalf of content providers) MUST be independently audited on an annual basis in order to maintain their COUNTER-compliant status. To facilitate this, a set of auditing standards and procedures has been published in Appendix E of this Code of Practice. COUNTER has tried to meet the needs of customers for credible usage statistics without placing an undue administrative or financial burden on content providers. For this reason, audits will be conducted online in accordance with the program included in the auditing standards and procedures (Appendix E).
The independent audit is REQUIRED within six months of a content provider’s first self-certifying their compliance with the COUNTER Code of Practice, and annually thereafter. COUNTER will recognize an audit carried out by any Certified Public Accountant (CPA) in the USA, by any Chartered Accountant (CA) in the UK, or by their equivalent in other countries. Alternatively, the audit may be done by a COUNTER-approved auditor, such as ABC, which is not a CA or a CPA. (Contact COUNTER for a list of approved auditors.)
The Audit Process¶
COUNTER-compliant content providers are required to schedule an audit in time for the audit due date listed on their entry on the COUNTER website (https://www.projectcounter.org/about/register/).
At least one month before the audit due date, content providers MUST advise COUNTER of the name of the organization that will carry out the audit. Any queries about the audit process may be raised at this time.
Irrespective of the auditor selected, the audit MUST adhere to the requirements and use the program specified in Appendix E of this Code of Practice. The audit is carried out in three stages. Stage 1 covers the format and structure of the usage reports. In Stage 2 the auditor tests the integrity of the reported usage statistics by creating their own usage on a sample basis and subsequently reviewing the usage reports for this activity. In Stage 3 the auditor checks that the delivery of the usage reports adheres to the COUNTER requirements.
Upon completion of the audit, the auditor is REQUIRED to send a signed copy of the audit report to the COUNTER office (compliance@counterusage.org). On receipt of the successful audit report, the content provider will be sent a dated COUNTER logo, which they can display on their website.
The dated logo MUST link to the content provider’s entry on the COUNTER website.
Failure to complete a successful audit by the due date may result in COUNTER removing that content provider from the list of compliant content providers on the COUNTER website.
Note that COUNTER has provided a COUNTER Report Validation Tool to allow content providers and auditors to quickly perform compliance checks related to format. It is highly RECOMMENDED that content providers use this tool to check their reports and COUNTER_SUSHI API implementation before they begin the audit.
Categories of Audit Result¶
There are three categories of audit result, as follows:
Pass - No further action is required by the content provider as a result of the audit. In some cases, the auditor may add observations to the audit report, which are intended to help the content provider improve its COUNTER usage reports but are not required for compliance.
Qualified Pass - The content provider has passed the audit, but the auditor raises a minor issue requiring further action to maintain COUNTER-compliant status. A minor issue does not affect the reported figures but MUST be resolved within three months of the audit to maintain COUNTER-compliant status. An example of a minor issue is where a report format does not conform to the COUNTER specifications.
Fail - The auditor has identified an issue that MUST be resolved within three months for the content provider to maintain COUNTER-compliant status.
Timetable and Procedure¶
R5 of the COUNTER Code of Practice, published in July 2017, will become the only valid version of the Code of Practice from 1 January 2019.
Applications for COUNTER-compliant status
A register of content providers and their platforms for which COUNTER-compliant usage reports are available is maintained by COUNTER and posted on the COUNTER website - https://www.projectcounter.org/about/register/
Content providers may apply to the Project Director (compliance@counterusage.org) for their products to be included on the register. Content providers will have to provide proof of initial compliance by including the results of COUNTER Report Validation Tool tests showing compliance for each of their reports, including testing both the upload of the tabular reports and SUSHI harvesting of the same reports. Upon receipt of the application and proof of compliance, content providers MUST allow at least one of the COUNTER library test sites to evaluate their usage reports.
When the usage reports are deemed to comply with the COUNTER Code of Practice, the content provider will be asked to sign a Declaration of COUNTER Compliance (Appendix C), after which the content provider and its platforms will be added to the register.
Within six months a report from an independent auditor confirming that the usage reports and data are indeed COUNTER-compliant will be required. See Appendix E for a description of the auditing program.
The signed declarations MUST be sent to the COUNTER office (compliance@counterusage.org) as email attachments.
Right to Use COUNTER-Compliance Logo and Designation¶
Content providers who have had their application accepted by COUNTER but have not yet completed a successful audit may use the designation “COUNTER Compliance Pending”. Only content providers that have passed the audit can use the designation “COUNTER Compliant” and the dated COUNTER logo.
Content providers who have not applied for compliance or whose compliance has lapsed MUST NOT claim or imply COUNTER compliance on their site, in licenses, or in their marketing and do not have the rights to use the COUNTER name or logo.
Other Compliance Topics¶
Content providers seeking COUNTER compliance are expected to comply with the following.
Including COUNTER in License Agreements¶
To encourage widespread implementation of the COUNTER Code of Practice, customers are urged to include the following clause in their license agreements with content providers:
‘The licensor confirms to the licensee that usage statistics covering the online usage of the products covered by this license will be provided. The licensor further confirms that such usage statistics will adhere to the specifications of the COUNTER Code of Practice, including data elements collected and their definitions; data processing guidelines; usage report content, format, frequency and delivery method’.
Confidentiality of Usage Data¶
Privacy and User Confidentiality¶
Statistical reports or data that reveal information about individual users will not be released or sold by content providers without the permission of that individual user, the consortium, and its member institutions (ICOLC Guidelines, October 2006).
It is the responsibility of the Content Providers to be aware of and ensure that they meet security and privacy requirements, including GDPR and other standards and requirements that may be applicable.
Institutional or Consortia Confidentiality¶
Content providers do not have the right to release or sell statistical usage information about specific institutions or the consortium without permission, except to the consortium administrators and other member libraries, and to the original content provider and copyright holder of the content. Use of institutional or consortium data as part of an aggregate grouping of similar institutions for purposes of comparison does not require prior permission as long as specific institutions or consortia are not identifiable. When required by contractual agreements, content providers, such as aggregators, may furnish institutional use data to the original content providers. (Based on ICOLC Guidelines, October 2006).
COUNTER Reporting for Consortia¶
Consortia license content for their members, and consortium administrators need access to COUNTER statistics that show how each member has used the licensed resources.
Access to SUSHI Credentials for Member Sites¶
Content providers MUST support the /members COUNTER_SUSHI API path to provide the consortium with the list of their members on the platform and the SUSHI credentials for each member. This will enable tools to be created to efficiently retrieve member usage and create separate or consolidated reporting.
Privacy and Confidentiality¶
COUNTER acknowledges that some organizations treat their usage data as sensitive and private information. Content providers may include the option for consortium members to opt-out of consortium reporting. COUNTER recommends the default setting for an organization is to opt-in to consortium reporting.
Content to Report Usage On¶
When a COUNTER report is harvested by a consortium administrator, a content provider may choose to limit member usage to include only content acquired through the consortium. Note that when such a limitation is in place the resulting report may differ from the member site’s own version of the report. Since not all content providers can apply such limits, the consortium is responsible for ensuring usage is filtered to the content they license for members.
When the content provider chooses to limit member usage to only content acquired through the consortium, they MUST include a message to this effect in the Notes element in their implementation of the /members path in the COUNTER_SUSHI API (see Section 8 above).
Detailed versus Summary Reports¶
A content provider MUST offer the option to provide a consortium-level summary of usage for the consortium. For a consortium summary report (usage for all members of the consortium rolled up at the consortium level), COUNTER acknowledges that the totals on the summary report may differ from the sum of the totals on individual member reports for the same items if the authentication method used identifies multiple member sites and usage is attributed to each such site (e.g. overlapping IP ranges).
SUSHI Service Limits¶
The content provider MUST NOT place limits on the SUSHI service (such as requests per day or amount of data transferred) that would prevent a consortium from retrieving reports for all its members.
Extending the Code of Practice¶
COUNTER recognises that some content providers may want to provide customized versions of COUNTER reports to address reporting needs specific to their platform and content. This section describes a method of extending the Code of Practice that avoids creating conflicting custom implementations between content providers.
Platform as a Namespace¶
Content providers and other organizations providing COUNTER reports wishing to create custom reports or introduce custom elements or element-values can do so by using their platform identifier (platform ID) as a namespace. For example, if EBSCO wanted to create a customized version of the “Journal Requests (Excluding OA_Gold)” Standard View for their link resolver product that includes a new Metric_Type for counting link-outs, they could do this by naming the report “ebscohost:TR_J1” and creating a new Metric_Type of “ebscohost:Total_Linkouts”.
The namespace MUST contain only ASCII alphanumeric characters (a-z, A-Z, 0-9); no spaces or punctuation are allowed.
COUNTER will assign the platform ID when adding the platform to their Registry of Compliance (content providers can suggest a value to be used for their platform ID). Other organizations providing COUNTER reports, such as consortia or ERM providers, may contact COUNTER to register a namespace if they desire to create extensions and customizations. COUNTER will maintain a list of approved namespaces.
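A minimal validation sketch for namespaced extension identifiers, reflecting the character rule above (illustrative only):

```python
import re

# ASCII letters and digits only, per the namespace rule above.
NAMESPACE_RE = re.compile(r"[A-Za-z0-9]+")

def is_valid_extension_name(qualified_name: str) -> bool:
    """Check a {namespace}:{name} identifier such as 'ebscohost:Total_Linkouts'."""
    namespace, sep, name = qualified_name.partition(":")
    return bool(sep) and bool(NAMESPACE_RE.fullmatch(namespace)) and bool(name)

assert is_valid_extension_name("ebscohost:Total_Linkouts")
assert not is_valid_extension_name("ebsco host:TR_J1")  # space not allowed
```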
Creating Customized COUNTER Reports¶
Customized versions of COUNTER reports can be created as long as the general layout for COUNTER reports is followed. New reports MUST be given an identifier and a name in the format of {namespace}:{report ID} and {namespace}:{report name}. An example of a custom report could be:
| Report_ID | Report_Name |
|---|---|
| ebscohost:LR1 | ebscohost:Link Out Report 1 |
Creating New Elements/Column Headings¶
New elements/column headings can be added to the Master Reports (PR, DR, TR, IR). The element name MUST take the form of {namespace}:{element name}. An example of a custom element/column heading could be: isi:Impact_Factor
Creating New Values for Enumerated Elements and Attributes¶
Several report elements and attributes in COUNTER reports include a controlled list of possible values. On occasion, a content provider may want to introduce additional values that better reflect their content and platform. The element value lists can be extended by including additional values in the form of {namespace}:{element value}. An example of a custom Metric_Type could be ebscohost:Total_Linkouts. The following is the list of elements that can be extended in this manner:
Data_Type
Section_Type
Access_Type
Access_Method
Metric_Type
Note that values for identifier fields (Institution_ID, Publisher_ID, etc.) MUST also include the namespace for these identifiers. For proprietary identifiers that are platform-specific, the platform ID should be used as the namespace.
Reserved Values Available for Extending Reports¶
This Code of Practice recognizes that there are some common extensions that content providers might want to include in Master Reports or when creating custom reports; therefore, the following element names and element values have been reserved for this common use:
| Reserved Name | Description | Use-case |
|---|---|---|
| Customer_ID | An element/column heading for the body of the report. | When a report contains usage for multiple organizations. |
| Institution_Name | An element/column heading for the body of the report. | When a report contains usage for multiple organizations. |
| Format | An element name used to identify the format of the content. Reserved values include: HTML, PDF, Other. | By tracking the format, the content provider can use R5 usage logs to generate R4 usage reports during the transition period. |
Restrictions in Using Customized Elements and Values¶
Report extensions can be used in custom reports as well as in Master Reports. If extensions are introduced to a Master Report, it MUST be possible for a user to exclude extended elements and values from the report if desired.
Extensions MUST NOT be used with Standard Views.
Continuous Maintenance¶
With R5, the COUNTER Code of Practice will operate under a continuous maintenance procedure to allow incremental changes to be made to the Code of Practice without creating a completely new release. This section describes those procedures.
Instructions for Submittal of Proposed Change¶
Changes and updates to the COUNTER Code of Practice can be submitted by anyone. Submissions MUST be made via email and directed to compliance@counterusage.org. Each submission MUST include:
Submitter contact information:
Name
Email
Phone
Affiliation
Description of the enhancement/adjustment (include the section and paragraph number of the current Code of Practice if applicable)
Reason for the change (use case and/or goals to be accomplished)
Any relevant attachments
Review of Change Requests¶
All submissions received will be acknowledged and forwarded to the COUNTER Executive Committee for consideration within 30 days of receipt.
Resolution of Proposed Changes¶
Responding to Submissions¶
The COUNTER Executive Committee (EC) will review submissions and provide a response within 90 days of receipt (to allow discussion at a regularly scheduled EC meeting). The EC will respond to every submission with one of the following, providing clarity when needed:
Proposed change accepted without modification
Proposed change accepted with modification
Proposed change accepted for further study
Proposed change rejected
If further study is needed, the EC may convene a separate working group to study the proposal and make recommendations related to the suggested change.
Approval of Changes¶
Changes that are substantive in nature (i.e. would require changes in how reports are generated or consumed) will be presented to COUNTER membership for comments for a period of at least 45 calendar days. All member comments MUST be considered and responded to by the EC or the designated working group.
After the comment period, changes to the COUNTER Code of Practice MUST be voted upon by the COUNTER Executive Committee and approved by committee majority. EC members can respond to a ballot by voting Yes, No, or Abstain. For clarity, the number of affirmative votes MUST be greater than 50% of the total number of EC members minus abstentions (a non-vote is considered a “No” vote).
Communication of Changes¶
COUNTER will inform the COUNTER membership about upcoming changes to the COUNTER Code of Practice through email and on the COUNTER website. Additionally, proposed and pending changes will be published on the Usus website and through posting on listservs that discuss usage topics.
Version and Change Control¶
Each update to the COUNTER Code of Practice will generate a new version number: the initial release of R5 is designated version 5.0; a non-substantive change (e.g. fixing typographical errors) increments the version by 0.0.1, creating version 5.0.1; a substantive change (requiring changes in implementation of the Code of Practice) increments the version by 0.1, creating version 5.1.
All changes included in each release will be included in the Change History section of the Code of Practice. The prior release will be archived as a PDF document and access to that release provided via the COUNTER website.
Implementation Schedule¶
Changes to the COUNTER Code of Practice may be non-substantive or substantive. A non-substantive change may be a clarification or correction of typographical errors that does not affect how the Code of Practice is implemented. A substantive change is one that would affect the implementation of the COUNTER Code of Practice. Examples of substantive changes are adding a new metric type or report, changing the requirement for including a data element from “may” to “MUST”, or changing processing rules.
Non-substantive changes can become effective immediately upon publication of the new version of the Code of Practice.
Substantive changes become effective for a given content provider within 12 months of publication of the new release or with the next audit, whichever date is later.
Substantive changes will be clearly marked in the change log in Appendix B to ensure they can be easily identified.
All other requirements of the Code of Practice will remain in effect during the implementation period for changes brought about by a new release.
Transitioning from Previous Releases or to New Reporting Services¶
A requirement of the COUNTER Code of Practice is that content providers offer libraries access to the current year plus the prior 24 months of usage, or usage from the date they first became compliant, whichever is later. This requirement must continue to be met even when a provider is transitioning to a new release of the COUNTER Code of Practice or moving to a new reporting service.
Transitioning to a New Reporting Service¶
When a content provider implements a new reporting service, underlying logging system, or approach, they:
MUST continue to meet the requirement to offer valid COUNTER reports for the current year plus the prior 24 months (or from the date they first became compliant, whichever is later) via a web interface and via a SUSHI server.
MUST support COUNTER reports that may include a range of months spanning the transition period. For example, if the new reporting service was deployed in August 2017, a customer could request a report for January-December 2017 and receive a single report.
When it is not practical to support a single report with date ranges that span the transition period, the content provider MUST perform the transition on the first day of a month. If the new reporting service was deployed in August 2017, a customer wanting January-December 2017 usage would request January-July 2017 from the previous reporting service and August-December 2017 from the new reporting service. For clarity, a provider MUST NOT perform the transition mid-month such that the customer is required to run reports on both the old and new reporting services for the same month and merge and sum the results to obtain actual monthly usage.
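A sketch of the resulting month-range split, treating months as (year, month) pairs; the transition month and everything after it belongs to the new reporting service:

```python
def prev_month(ym):
    """(year, month) immediately before ym."""
    year, month = ym
    return (year - 1, 12) if month == 1 else (year, month - 1)

def split_request(begin, end, transition):
    """Split a requested (year, month) range at a first-of-month
    transition: months before the transition are served by the old
    reporting service, the transition month onwards by the new one."""
    old = (begin, min(end, prev_month(transition))) if begin < transition else None
    new = (max(begin, transition), end) if end >= transition else None
    return old, new

# New service deployed August 2017; request for January-December 2017:
assert split_request((2017, 1), (2017, 12), (2017, 8)) == (
    ((2017, 1), (2017, 7)),   # old reporting service
    ((2017, 8), (2017, 12)),  # new reporting service
)
```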
Transitioning to a New Code of Practice¶
New releases of the COUNTER Code of Practice will typically be assigned an effective date after which a content provider must be compliant. In such cases, a content provider may choose to implement the new release before the effective date. New releases of the COUNTER Code of Practice may come with specific transition instructions, but, in general, content providers:
May implement the new release prior to the effective date of the new release.
Are not required to release reports for usage transacted prior to the implementation date; however, they may choose to do so at their discretion.
MUST continue to meet the requirement to offer valid COUNTER reports for the current year plus the prior 24 months (or from the date they first became compliant, whichever is later) via a web interface and via a SUSHI server.
MUST provide a means for customers to receive prior-release reports for usage transacted from the content provider’s transition date through to 3 full months after the effective date of the new release. For clarity, if a new release becomes effective 1 February 2019 and a content provider implements the new release 1 October 2018, a customer must be able to obtain the prior-release usage reports for usage prior to the transition period as well as for usage that occurred in October 2018 to April 2019. A content provider can meet this requirement in one of the following ways:
Maintain two reporting systems such that usage is logged to the old and new reporting services and customers can access current-release reports on the new reporting service and prior-release reports on the old reporting service.
Support the prior-release reports on the new reporting service. This may involve using the metrics from the new release to produce reports formatted to the prior release; or it may involve logging additional data to the new reporting service such that the prior release reports can continue to be supported.
If the new release offers metrics compatible with the prior release, offer only new-release reports, provided customers have access to freely available tools that automatically generate the required prior-release report from an equivalent new-release report, and provided these reports remain available in tabular form and via the COUNTER_SUSHI API.
May choose to support COUNTER reports that include a range of months spanning the transition period. For example, if the new reporting service compliant with a new COUNTER release was deployed in October 2018, a customer could request a report for January-December 2018 and receive a single report in either the new release or the previous release (see the previous point on the transition period).
When it is not practical to support a single report with date ranges that span the transition period, the content provider MUST perform the transition on the first day of a month. E.g. if the new reporting service was deployed in October 2018, a customer wanting January-December 2018 usage would request January-September 2018 from the previous reporting service and October-December 2018 from the new reporting service. For clarity, a provider MUST NOT perform the transition mid-month such that the customer is required to run reports on both the old and new reporting services for the same month and merge and sum the results to obtain actual monthly usage.
Transitioning from COUNTER R4 to R5¶
The transition from R4 to R5 meets the general requirements outlined in Section 13.2.
Content providers MUST be compliant by February 2019 for delivery of R5 reports starting with January 2019 usage.
Content providers may choose to release their R5 compliant reporting service before February 2019.
A content provider’s customers MUST be able to obtain R4-compliant reports for that content provider from the time the content provider’s R5 reporting service was released through to April 2019 (providing access to March 2019 usage). A content provider may provide access to R4 reports beyond April 2019 at their discretion.
Content providers may choose to meet the requirement to provide R4 reports based on R5 metrics. The following R4 reports must be supported (when applicable to the platform): BR1, BR2, BR3, DB1, DB2, JR1, JR2, JR5, and PR1. The following table presents the equivalent R4 metric types and R5 Metric_Types and filters by report.
| R4 Report | R4 metric | R5 equivalent |
|---|---|---|
| BR1 | Full-text requests (at the book level) | Unique_Title_Requests AND Data_Type=Book AND Section_Type=Book |
| BR2 | Full-text requests (at the chapter/section level) | Total_Item_Requests AND Data_Type=Book AND Section_Type=Chapter\|Section |
| BR3 | Access denied - concurrent/simultaneous user limit exceeded | Limit_Exceeded AND Data_Type=Book |
| | Access denied - content item not licensed | No_License AND Data_Type=Book |
| DB1 | Regular searches | Searches_Regular |
| | Searches - federated and automated | SUM(Searches_Automated, Searches_Federated) |
| | Result clicks | Total_Item_Investigations attributed to the database |
| | Record views | Total_Item_Investigations attributed to the database. (The resulting result-click and record-view counts will be the same; librarians should use one or the other and not add them together.) |
| DB2 | Access denied - concurrent/simultaneous user limit exceeded | Limit_Exceeded AND Data_Type=Database |
| | Access denied - content item not licensed | No_License AND Data_Type=Database |
| JR1 | Full-text requests | Total_Item_Requests AND Data_Type=Journal |
| | HTML requests | Leave blank unless the HTML and PDF formats are also logged, in which case: Total_Item_Requests AND Data_Type=Journal AND Format=HTML |
| | PDF requests | Leave blank unless the HTML and PDF formats are also logged, in which case: Total_Item_Requests AND Data_Type=Journal AND Format=PDF |
| JR2 | Access denied - concurrent/simultaneous user limit exceeded | Limit_Exceeded AND Data_Type=Journal |
| | Access denied - content item not licensed | No_License AND Data_Type=Journal |
| JR5 | Full-text requests (by year of publication) | Total_Item_Requests AND Data_Type=Journal, pivot on YOP |
| PR1 | Regular searches | Searches_Platform |
| | Searches - federated and automated | Leave blank (searches performed on the platform via federated and automated searching are included in Searches_Platform). |
| | Result clicks | SUM(Total_Item_Investigations attributed to the databases) |
| | Record views | SUM(Total_Item_Investigations attributed to the databases). (The resulting result-click and record-view counts will be the same; librarians should use one or the other and not add them together.) |
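During the transition, a provider might encode this mapping as data to drive an R4-report generator from R5 usage. An illustrative, partial sketch (the dictionary layout is an assumption, not part of the Code of Practice):

```python
# Partial, illustrative encoding of the mapping above; each entry pairs
# an R4 report/metric with the R5 Metric_Type(s) and filters used to
# derive it. A list indicates values to be summed at report time.
R4_TO_R5 = {
    ("BR1", "Full-text requests (book level)"): {
        "Metric_Type": "Unique_Title_Requests",
        "Data_Type": "Book",
        "Section_Type": "Book",
    },
    ("JR1", "Full-text requests"): {
        "Metric_Type": "Total_Item_Requests",
        "Data_Type": "Journal",
    },
    ("DB1", "Searches - federated and automated"): {
        "Metric_Type": ["Searches_Automated", "Searches_Federated"],
    },
}
```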
Change History¶
| Release | Description of Change | Substantive? | Date approved | Date for compliance |
|---|---|---|---|---|
| 5.0 | New Code of Practice to replace Release 4. | Yes | 2017-07-01 | 2019-02-28 (with support for January 2019 usage) |
| 5.0.1 | Amendments, corrections and clarifications based on feedback and questions from the community. | Yes | 2018-12-10 | 2019-02-28 (with support for January 2019 usage) |
A detailed description of the changes is provided in Appendix B.
Appendices¶
Appendix A: Glossary of Terms¶
Term |
Definition |
Examples/formats |
---|---|---|
A&I database |
A non-full-text database that typically contains article metadata, abstracts, and subject classifications. Used by researchers to locate publications relevant to their research. |
PubMed, PsycInfo |
A&I service |
A vendor or website that provides A&I databases. |
American Psychological Association (APA) |
Abstract |
A short summary of an article or content item. A detailed view of article metadata that includes the summary but not the full text. Accessing the abstract/detailed view falls into the usage category of Investigations. |
|
Abstract and Index Database Host |
See A&I service. |
APA, EBSCOhost, ProQuest |
Accepted manuscript |
The version of a journal article that has been accepted for publication in a journal. This version includes any pre-publication revisions, but it does not include any formatting or copyediting changes or corrections. |
|
Access Denied |
User is denied access to a content item because their institution lacks a proper license or because simultaneous user limits specified in the license have been exceeded. |
|
Access Denied: Limit_Exceeded |
User is denied access to a content item because the simultaneous user limit for their institution’s license would be exceeded. |
|
Access Denied: No_License |
User is denied access to a content item because the user or the user’s institution does not have access rights under an agreement with the content provider. |
|
Access_Method |
A COUNTER attribute indicating whether the usage related to investigations and requests was generated by a human user browsing and searching a website (Regular) or by Text and Data Mining processes (TDM). |
Regular, TDM |
Access_Type |
A COUNTER attribute used to report on the nature of access control restrictions, if any, placed on the content item at the time when the content item was accessed. |
Controlled, OA_Gold_APC, OA_Gold_Non_APC, OA_Delayed, Other_Free_to_Read |
Aggregated_Full_Content |
A COUNTER Host_Type for content providers who license full-text articles and possibly non-textual content (beyond bibliographic information). |
|
Aggregated full content database |
A database that contains full-text articles and possibly non-textual content (beyond bibliographic information) and that is sold as a self-contained/pre-set grouping of data. |
Academic Search Complete |
Aggregated full content database host |
A content host that provides access to aggregated full content databases. |
EBSCOhost, ProQuest |
Aggregator |
A type of content provider that hosts content from multiple publishers, delivers content direct to customers, and is paid for this service by customers. |
EBSCOhost, Gale, Lexis Nexis, ProQuest |
ALPSP |
The Association of Learned and Professional Society Publishers is an international trade association of non-profit publishers. |
|
APC |
See Article processing charge. |
|
API |
Application Programming Interface. |
|
Article |
An item of original written work published in a journal, other serial publication, or in a book. An article is complete, but usually cites other relevant published works in its list of references, if it has one. A COUNTER Data_Type. |
|
Article header |
See Metadata. |
|
Article processing charges |
An article processing charge (APC), also known as a publication fee, is a fee which is sometimes charged to authors to make a work available open access in either an open access journal or hybrid journal. …They are the most common funding method for professionally published open access articles. [Wikipedia] |
|
Article_Version |
Defined by ALPSP and NISO as a classification of the version of an Article as it goes through its publication life-cycle. An element on a COUNTER Expanded Item report that identifies the version of the Article being accessed. Typically COUNTER usage reporting only reflects usage of the following article versions (of the 7 versions defined by the ALPSP/NISO JAV Technical Working Group): Accepted Manuscript (AM), Version of Record (VoR), Corrected Version of Record (CVoR), and Enhanced Version of Record (EVoR). |
AM, VoR, CVoR, EVoR |
Articles in press |
Full-text articles that have been accepted for publication in a journal and have been made available online to customers and that will be assigned a publication date of the current year or a future year. |
|
Attribute |
Used to specify a Report Filter when customizing a Master Report. |
|
Author(s) |
The person/people who wrote/created the items whose usage is being reported. |
|
Automated search |
A search from a discovery layer or similar technology where multiple Databases are searched simultaneously with a single query from the user interface. The end user is not responsible for selecting which Databases are being searched. Usage of this nature is reported as Searches_Automated. A Search run repeatedly (i.e. daily or weekly) by a script or automated process. Usage of this nature must not be included in COUNTER reports. |
|
Automated search agent |
A script or automated process that runs a search repeatedly, usually at pre-set intervals such as daily or weekly. |
|
Backfile |
See Archive. |
Oxford Journals Archive |
Begin_Date |
The first date in the range for the usage represented in a COUNTER report. |
|
Book |
A non-serial publication of any length available in print (in hard or soft covers or in loose-leaf format) or in electronic format. A COUNTER Data Type. |
|
Book Access Denied |
Access Denied activity for books, where users were denied access because simultaneous-user licenses were exceeded, or their institution did not have a license for the book. |
|
Book chapter |
A subdivision of a book or of some categories of reference work; usually numbered and titled. |
|
Book Request |
Book content items retrieved. |
|
Book Section |
See Section_Type. |
|
Book Segment |
See Section_Type. |
|
Bulk download |
A single event where multiple content items are downloaded to the user’s computer. |
|
Cache |
Automated system that collects items from remote servers to serve them closer and more efficiently to a given population of users. Often populated by robots or modern browsers. Note: Publishers take steps to prevent local caching of their content, e.g. by including appropriate headers on their site to restrict caching. |
|
Central Index |
Also known as a Discovery Index. A collection of locally-hosted, consistently indexed metadata and content harvested from multiple external metadata and content sources, frequently including a library’s catalog and repository metadata, and usually representing a significant portion of the library’s collection. |
|
Certified Public Accountant (CPA) |
An accounting designation granted to accounting professionals in the United States. |
|
Chapter |
A subdivision of a book or of some categories of reference work, usually numbered and titled. A COUNTER Section_Type. |
|
Chartered Accountant (CA) |
An international accounting designation granted to accounting professionals in many countries around the world, aside from the United States. |
|
Citation |
A reference to a published or unpublished source. |
|
Collection |
A subset of the content of a service. A collection is a branded group of online information products from one or more vendors that can be subscribed to/licensed and searched as a complete group. For COUNTER, reporting is restricted to pre-set collections that are defined like databases. See Database. Note: A package or bundle provided by a publisher is not considered a database or a collection. |
|
Component |
A uniquely identifiable constituent part of a content item composed of more than one file (digital object). See Section 3. |
|
Consortium |
A group of institutions joining together to license content. |
OhioLINK |
Consortium member |
An institution that has obtained access to online information resources as part of a consortium. A consortium member is defined by a subset of the consortium’s range of IP addresses or by other specific authentication details. |
Ohio State University |
Content host |
A website that provides access to content typically accessed by patrons of libraries and other research institutions. |
|
Content item |
A generic term describing a unit of content accessed by a user of a content host. Typical content items include articles, books, chapters, multimedia, etc. |
|
Content provider |
An organization whose function is to commission, create, collect, validate, host, distribute, and trade information in electronic form. |
Any publisher, the Metropolitan Museum, Magnum, JSTOR |
Controlled |
An access type. At the time of the transaction, the content item was not open (i.e. was behind a paywall) because access is restricted to authorized users. Access to content under a trial subscription would be considered Controlled, not Other_Free_to_Read. |
|
Copyright holder |
A person or a company who owns any one of the Exclusive Rights of copyright in a work. |
|
Corrected Version of Record |
A version of the Version of Record of a journal article in which errors in the VoR have been corrected. The errors could be author errors, publisher errors, or other processing errors. |
|
COUNTER compliance pending |
Status of a vendor who is currently not compliant but whose audit is in progress or scheduled. |
|
COUNTER Report Validation Tool |
An online tool to validate COUNTER reports in JSON and tabular format. |
|
COUNTER_SUSHI API |
A RESTful implementation of SUSHI automation intended to return COUNTER Release 5 reports and snippets of COUNTER usage in JSON format. |
|
Crawler |
See Internet robot, crawler, spider. |
|
Created |
COUNTER Element Name. The date and time the usage was prepared, in RFC3339 date-time format (yyyy-mm-ddThh:mm:ssZ). |
|
Created by |
COUNTER Element Name. The name of the organization or system that created the COUNTER report. |
|
Crossref |
A not-for-profit membership organization for publishers. |
|
Customer |
An individual or organization that can access a specified range of the Content provider’s services and/or content and is subject to terms and conditions agreed with the Content provider. |
|
Customer_ID |
The field in the COUNTER reports that indicates whose usage is being reported. May be a proprietary or standard value such as ISNI. |
ISNI=000000012150090X |
Data harvesting |
Automated processes used for extracting data from websites. |
|
Data_Repository |
An online database service; an archive that manages the long-term storage and preservation of digital resources and provides a catalogue for discovery and access. A COUNTER host type. |
Figshare |
Data Types, Data_Type |
The field identifying type of content. COUNTER recognizes the following Data_Types:
|
|
Database |
A collection of electronically stored data or unit records (facts, bibliographic data, texts) with a common user interface and software for the retrieval and manipulation of data. (NISO) A COUNTER Data_Type used when reporting search activity at the database level. |
Social Science Abstracts, Reaxys |
Dataset |
See Data_Type. |
|
Database Master Report |
A report that contains additional filters and breakdowns beyond those included in the standard COUNTER reports and that is aggregated to the database level. |
|
Delayed open access |
At the time of the transaction, the content item published in a subscription journal is free to read after an embargo period. See OA_Delayed. |
|
Digital Object Identifier |
See DOI. |
|
Discovery Layer |
A web-accessible interface for searching, browsing, filtering, and otherwise interacting with indexed metadata and content. The searches produce a single, relevancy-ranked results set, usually displayed as a list with links to full content, when available. Typically, discovery layers are customizable by subscribing libraries and may be personalized by individual users. |
|
Discovery service |
A pre-harvested central index coupled with a fully featured discovery layer. |
EDS, Primo, Summon |
Discovery services provider |
An organization that hosts a discovery service. |
EBSCOhost (EDS), ProQuest (Primo/Summon) |
Distributed Usage Logging (DUL) |
A peer-to-peer channel for the secure exchange and processing of COUNTER-compliant private usage records from hosting platforms to publishers. |
|
DNS lookups |
Domain Name System lookups. |
|
DOI (digital object identifier) |
The digital object identifier is a means of identifying a piece of intellectual property (a creation) on a digital network, irrespective of its current location. (www.doi.org) DOIs may be assigned at the title, article/chapter, or component level. |
|
Double-click |
A repeated click on the same link by the same user within a period of 30 seconds. COUNTER requires that double-clicks must be counted as a single click. |
|
Double-click filtering |
A process to remove the potential of over-counting which could occur when a user clicks the same link multiple times. Double-click filtering applies to all metric types. |
|
DR |
Database Master Report. |
|
DR_D1 |
Database Search and Item Usage. A pre-set Standard View of DR showing total item investigations and requests, as well as searches. |
|
DR_D2 |
Database Access Denied. A pre-set Standard View of DR showing where users were denied access because simultaneous use (concurrency) licenses were exceeded, or their institution did not have a license for the database. |
|
DUL |
See Distributed Usage Logging (DUL). |
|
eBook host |
A content host that provides access to eBook and reference work content. |
EBL, EBSCOhost, ScienceDirect |
eBook, E-Book |
Monographic content that is published online. |
|
EC |
Executive Committee. |
|
eJournal |
Serial content that is published online. |
|
eJournal host |
A content host that provides access to online serial publications (journals, conferences, newspapers, etc.) |
ScienceDirect |
Element |
A piece of information to be reported on, displayed as a column heading (and/or in the Report Header) in a COUNTER report. |
Listed for each Master Report in section 4. |
Embargo period |
The period of time before an article is moved out from behind the paywall, i.e. from Controlled to OA_Delayed. |
|
End_Date |
The last date in the range for the usage represented in a COUNTER report. |
|
Enhanced Version of Record |
A version of the Version of Record of a journal article that has been updated or enhanced by the provision of supplementary material. For example, multimedia objects such as audio clips and applets; additional XML-tagged sections, tables, or figures or raw data. |
|
e-Resources |
Electronic resources. |
|
Error_No |
A unique numeric code included as part of a COUNTER SUSHI exception that identifies the type of error that applies to a report. |
|
Exception |
An optional element that may be included within a COUNTER report indicating some difference between the usage that was requested and the usage that is being presented in the report. An exception includes the following elements:
|
3040: Partial Data Returned (request was for 2016-01-01 to 2016-12-31, but usage is only available to 2016-08-30). |
Exclude_Monthly_Details |
A report attribute: when applied, the report includes the Reporting_Period_Total column without month-by-month breakdowns. |
|
Federated search |
An application that allows users to simultaneously search multiple databases hosted by the same or different vendors with a single query from a single user interface. The end user is not responsible for selecting the database being searched. See Appendix G. |
MetaLib, EBSCOhost Connection |
Filter |
See Report filter. |
|
Format |
A COUNTER Element Name used to identify the format of the content. Reserved values include: HTML, PDF, Other. |
|
Full-text database |
A database that consists of full-text articles and possibly non-textual content (beyond bibliographic information) and that is sold as a self-contained/pre-set grouping of data. |
|
Full-text article |
The complete text—including all references, figures, and tables—of an article, plus links to any supplementary material published with it. |
|
GDPR |
General Data Protection Regulation. |
|
GET /status |
COUNTER_SUSHI API path. Returns the current status of the COUNTER_SUSHI API service. |
|
GET /reports |
COUNTER_SUSHI API path. Returns a list of reports supported by the COUNTER_SUSHI API service. |
|
GET /members |
COUNTER_SUSHI API path. Returns the list of consortium members or sites for multi-site customers. |
|
Gold Open Access |
See OA_Gold. |
|
Host |
See Content host. |
Ingenta, Semantico, SpringerLink |
Host Site |
See Content host. |
|
Host types |
A categorization of Content Hosts used by COUNTER to facilitate implementation of the Code of Practice. The Code of Practice identifies the Host types that apply to the various artefacts in the Code of Practice, allowing a Content Host to quickly identify the areas of the Code of Practice to implement by identifying the Host Types categories that apply to them. |
E-Journal |
Host UI, host-site UI |
User interface that an end-user would use to access content on the Content host. |
|
HTTP |
HyperText Transfer Protocol. |
|
Hybrid publication |
A publication that is available via a subscription license but also contains articles available as Gold Open Access. |
|
Institution |
The organization for which usage is being reported. |
|
Institution_ID |
A unique identifier for an institution. In COUNTER reports the Institution_ID is presented as a combination of the identifier type and its value. Proprietary identifiers that identify the content platform can be used. |
isni=000000012150090X |
Institution_Name |
The field in the COUNTER reports that indicates the name of the institution. |
|
Institutional identifier |
See Institution_ID. |
|
Intermediary |
See Content provider. |
|
Internet robot, crawler, spider |
Any automated program or script that visits websites and systematically retrieves information from them, often to provide indexes for search engines. See Appendix I. |
|
Investigation |
A category of COUNTER metric types that represent a user accessing information related to a content item (e.g. an abstract or detailed descriptive metadata of an article) or a content item itself (e.g. the full text of an article). |
|
IP |
Internet Protocol. |
|
IP address |
Internet protocol (IP) address of the computer on which the session is conducted. May be used by content providers as a means of authentication and authorization and for identifying the institution a user is affiliated with. The identifying network address (typically four 8-bit numbers: aaa.bbb.cc.dd) of the user’s computer or proxy. |
|
IR |
Item Master Report. |
|
IR_A1 |
Journal Article Requests. A pre-set Standard View of IR showing total item requests for journal articles. |
|
IR_M1 |
Multimedia Item Requests. A pre-set Standard View of IR showing total item requests for multimedia items. |
|
ISBN (International Standard Book Number) |
A unique 13-digit number used to identify a book. |
|
ISIL |
International Standard Identifier for Libraries and Related Organizations - https://english.slks.dk/work-areas/libraries/library-standards/isil/ |
|
ISNI (International Standard Name Identifier) |
A unique number used to identify authors, contributors, and distributors of creative works, including researchers, inventors, writers, artists, visual creators, performers, producers, publishers, aggregators, etc. COUNTER defines ISNI as an optional identifier for an institution. |
|
ISO |
International Organization for Standardization. |
|
ISSN (International Standard Serial Number) |
A unique 8-digit number used to identify a print or electronic periodical publication. A periodical published in both print and electronic form may have two ISSNs, a print ISSN and an electronic ISSN. |
|
Issue |
A collection of journal articles that share a specific issue number and are presented as an identifiable unit online and/or as a physically bound and covered set of numbered pages in print. |
|
Issue date |
The date of release by the publisher to customers of a journal issue. When used for COUNTER YOP (year of publication) reporting, the issue date of the print should be used when print and online issue dates differ. |
|
Item |
Collective term for content that is reported at a high level of granularity, e.g. a full-text article (original or a review of other published work), an abstract or digest of a full-text article, a sectional HTML page, supplementary material associated with a full-text article (e.g. a supplementary data set), or non-textual resources such as an image, a video, audio, a dataset, a piece of code, or a chemical structure or reaction. |
Full text article, TOC, Abstract, Database record, Dataset, Thesis |
Item Master Report |
A COUNTER report that provides usage data at the item or item-component level. |
|
Item Reports |
A series of COUNTER reports that provide usage data at the item or item-component level. |
|
Javascript Object Notation |
See JSON. |
|
Journal |
A serial that is a branded and continually growing collection of original articles within a particular discipline. A COUNTER data type. |
Tetrahedron Letters |
Journal DOI |
See DOI. |
|
Journal Reports |
See Title Reports. |
|
Journal Requests |
Journal content items retrieved. |
|
JQuery |
A JavaScript library. |
|
License |
A contract or agreement that provides an organization or individual (licensee) with the right to access certain content. |
|
Limit_Exceeded |
A COUNTER Metric_Type. User is denied access to a content item because the simultaneous user limit for their institution’s license would be exceeded. |
|
Linking_ISSN |
International Standard Serial Number that links together the ISSNs assigned to all instances of a serial publication in the format nnnn-nnn[nX] (JSON reports only). |
|
Log file analysis |
A method of collecting usage data in which the web server records all of its transactions. |
|
Master Reports |
Reports that contain additional filters and breakdowns beyond those included in the standard COUNTER reports. |
|
Metadata |
A series of textual elements that describes a content item but does not include the item itself. For example, metadata for a journal article would typically include publisher, journal title, volume, issue, page numbers, copyright information, a list of names and affiliations of the authors, author organization addresses, the article title and an abstract of the article, and keywords or other subject classifications. |
|
Metadata provider |
An organization, such as a publisher, that provides descriptive article/item-level metadata to an online search service. |
|
Metric Types, Metric_Types |
An attribute of COUNTER usage that identifies the nature of the usage activity. See Sections 4.1.3; 4.2.3; 4.3.3; 4.4.3. |
Total_Requests |
Monograph Text |
See Book. |
|
Multimedia |
Non-textual media such as images, audio, and video. |
|
Multimedia collection |
A grouping of multimedia items that are hosted and searched as a single unit and behave like a database. A COUNTER host type. See also Database. |
|
Multimedia host |
A content host that provides access to multimedia content. |
|
Multimedia item |
An item of non-textual media content such as an image or streaming or downloadable audio or video files. (Does not include thumbnails or descriptive text/metadata.) |
|
NISO |
The National Information Standards Organization is a United States non-profit standards organization that develops, maintains and publishes technical standards related to publishing, bibliographic and library applications. [Wikipedia] |
|
Namespace |
A term primarily used in programming languages where the same name may be used for different objects. It is created to group together those names that might be repeated elsewhere within the same or interlinked programs, objects and elements. For example, an XML namespace consists of element types and attribute names. Each of the names within that namespace is only related/linked to that namespace. The name is uniquely identified by the namespace identifier ahead of the name. For example, Namespace1_John and Namespace2_John are the same name but within different namespaces. |
|
Newspaper or Newsletter |
Textual content published serially in a newspaper or newsletter. |
|
No_License |
A COUNTER Metric_Type. User is denied access to a content item because the user or the user’s institution does not have access rights under an agreement with the vendor. |
|
OA |
See Open access. |
|
OA_Delayed |
A COUNTER Access_Type. At the time of the transaction, the content item was available as open access because the publisher’s embargo period had expired (delayed open access). |
|
OA_Gold |
A COUNTER Access_Type. At the time of the transaction, the content item was immediately and permanently available as open access because an article processing charge (APC) had been paid. Content items may appear in a hybrid publication or a fully open access publication. Note that content items offered as delayed open access (open after an embargo period) would be classified as OA_Delayed. |
|
OCLC |
OCLC (Online Computer Library Center). An American non-profit cooperative organization “dedicated to the public purposes of furthering access to the world’s information and reducing information costs”. It was founded in 1967 as the Ohio College Library Center. [Wikipedia] |
|
Online_ISSN |
A COUNTER identifier for the ISSN assigned to the online manifestation of a serial work. See also ISSN. |
1533-4406 |
Open access |
Open Access (OA) refers to online research outputs that are free of all restrictions on access (e.g. access tolls) and free of many restrictions on use (e.g. certain copyright and license restrictions). Open access can be applied to all forms of published research output, including peer-reviewed and non-peer-reviewed academic journal articles, conference papers, theses, book chapters, and monographs. [Wikipedia] |
|
ORCID |
An international standard identifier for individuals (i.e. authors) to use with their name as they engage in research, scholarship, and innovation activities. A COUNTER identifier for item contributors. See http://orcid.org. |
|
Other |
A content item or section that cannot be classified by any of the other data types. |
|
Other_Free_to_Read |
A COUNTER Access_Type. At the time of the transaction, the content item was freely available for reading for reasons such as promotions. This also covers all journals where all articles are free to all users because the journal is funded through advertising. |
|
Page tag |
Page-tagging is a method of collecting usage data that uses, for example, JavaScript on each page to notify a third-party server when a page is rendered by a web-browser. |
|
Parent |
In COUNTER Item Reports the parent is the publication an item is part of. For a journal article, the parent is the journal, and for a book chapter it is the book. |
|
Paywall |
A term used to describe the fact that a user attempting to access a content item must be authorized by license or must pay a fee before the content can be accessed. |
|
PDF |
Portable Document Format: a file format read by Adobe Acrobat and other PDF readers. Items such as full-text articles or journals published in PDF format tend to replicate the printed page in appearance. |
|
PHP |
Hypertext Preprocessor is a server-side scripting language designed for web development. The PHP reference implementation is now produced by The PHP Group. [Wikipedia] |
|
Platform |
An interface from an aggregator, publisher, or other online service that delivers the content to the user and that counts and provides the COUNTER usage reports. |
Wiley Online Library, HighWire |
Platform Master Report |
A Report that contains additional filters and breakdowns beyond those included in the standard COUNTER reports, and which are aggregated to the platform level. |
|
Platform Reports |
A series of COUNTER reports that provide usage aggregated to the platform level. |
|
Platform search |
Search conducted by users of a Platform. |
|
Platform usage |
Activity across all metrics for entire platforms. |
|
PR |
Platform Master Report. |
|
PR_P1 |
Platform Usage. A pre-set Standard View of PR showing total and unique item requests, as well as platform searches. |
|
Print_ISSN |
A COUNTER identifier for the ISSN assigned to the print manifestation of a work. See also ISSN. |
0028-4793 |
Proprietary Identifier |
See Proprietary_ID. |
|
Proprietary_ID |
A COUNTER identifier for a unique identifier given by publishers and other content providers to a product or collection of products. |
|
Provider ID |
A unique identifier for a content provider, used by discovery services and other content sites to track usage of content items supplied by that provider. |
|
Publication Date, Publication_Date |
An optional field in COUNTER item reports and Provider Discovery Reports: the date on which the publisher released the content item to customers. |
|
Publisher |
An organization whose function is to commission, create, collect, validate, host, distribute and trade information online and/or in printed form. |
Sage, Cambridge University Press |
Publisher_ID |
A COUNTER identifier for a publisher’s unique identifier. In COUNTER reports the publisher ID is presented as a combination of identifier type and value. |
|
R4 |
Release 4. |
|
R5 |
Release 5. |
|
Reference work |
An authoritative source of information about a subject used to find quick answers to questions. The content may be stable or updated over time. |
Dictionary, encyclopedia, directory, manual, guide, atlas, bibliography, index |
References |
A list of works referred to in an article or chapter with sufficient detail to enable the identification and location of each work. |
|
Registry of compliance |
The COUNTER register of content providers compliant with the COUNTER Code of Practice. |
|
Regular |
A COUNTER Access_Method. Indicates that usage was generated by a human user browsing/searching a website, rather than by text and data mining processes. |
|
Regular search |
A search conducted by a user on a host where the user is in control over which databases can be searched. |
|
Release |
Version of the COUNTER Code of Practice. |
|
Report Attribute, Report_Attributes |
A series of zero or more report attributes applied to the report. Typically, a report attribute affects how the usage is presented, but does not change the totals. |
Exclude_Report_Header; |
Report filters |
In COUNTER reports the report filter can be used to limit the usage returned. |
Data_Type=journal |
Report_ID |
The alphanumeric identifier of a specific COUNTER report or Standard View. |
DR_D1: Database Search and Item Usage. |
Report item attributes |
A series of elements that describe the nature of usage for an item and may include Access_Type, YOP, etc. |
|
Report name |
The name of a COUNTER report. |
Journal Title Report 1 |
Report validation tool |
See COUNTER Report Validation Tool. |
|
Reporting period, Reporting_Period |
The total time period covered in a usage report. |
|
Repository |
A host that provides access to an institution’s research output. Includes subject, institutional, and departmental repositories. |
Cranfield CERES |
Repository item |
A content item hosted in a repository, including one that consists of one or more digital objects such as text files, audio, video or data, described by associated metadata. |
|
Request |
A category of COUNTER Metric Types that represents a user accessing content (i.e. full text of an article). |
Total_Item_Requests |
Requestor ID |
A system-generated hash identifier that uniquely identifies a requestor session. |
|
Required reports |
The COUNTER reports that Host_Types are required to provide. |
|
Research data |
Data that supports research findings and may include databases, spreadsheets, tables, raw transaction logs, etc. |
|
RESTful COUNTER_SUSHI API |
A RESTful implementation of SUSHI automation intended to return COUNTER Release 5 reports and snippets of COUNTER usage in JSON format. |
|
Return code |
An HTTP response status code, defined and maintained by the W3C (http://www.w3.org/Protocols/HTTP/HTRESP.html). |
|
Robot |
See Internet robot, crawler, spider. |
|
Scholarly Collaboration Network |
A service used by researchers to share information about their work. |
Mendeley, Reddit/Science |
Scholarly Collaboration Network data aggregator |
A host who provides access to metrics on communications and interactions on scholarly collaboration networks. |
Altmetric.com |
Screen scraping |
The action of using a computer program to copy data from a website. |
|
Search |
A user-driven intellectual query, typically equated to submitting the search form of the online service to the server. |
|
Search engine |
A service that allows users to search for content via the World Wide Web. |
|
Searches_Regular |
A COUNTER Metric Type used to report on searches conducted by a user on a host where the user is in control over which databases can be searched. Note: If a search is conducted across multiple databases, each database searched can count that search. See also Regular search. |
|
Searches_Automated |
A COUNTER Metric Type used to report searches conducted through a discovery service or by an automated search agent. See also Automated search. |
|
Searches_Federated |
A COUNTER Metric Type used to report searches conducted through a federated search service. See Appendix G. See also Federated search. |
|
Searches_Platform |
A COUNTER Metric Type used to report searches conducted on a platform. Note: Searches conducted against multiple databases on the platform will only be counted once. |
|
Section |
The first level of subdivision of a book or reference work. |
Chapter, entry |
Section Types, Section_Type |
A COUNTER attribute that identifies the type of section that was accessed by the user. |
Article, book, chapter |
Serial |
A publication in any medium issued in successive parts bearing numerical or chronological designations and intended to be continued indefinitely. This definition includes periodicals, newspapers, and annuals (reports, yearbooks, monographic series). (NISO) |
|
Server-side scripting language |
Server-side scripting is a technique used in web development which involves employing scripts on a web server which produce a response customized for each user’s request to the website. The alternative is for the web server itself to deliver a static web page. [Wikipedia] |
|
Service |
See Content host. |
ScienceDirect, Academic Universe |
Session |
A successful request of an online service. A single user connects to the service or database and ends by terminating activity that is either explicit (by leaving the service through exit or logout) or implicit (timeout due to user inactivity). (NISO) |
|
Session cookie |
A data file that a web server can place on a browser to track activity by a user and attribute that usage to a session. |
|
Session ID |
A unique identifier for a single user session or, in the case of a double-click, multiple clicks on the same link within 30 seconds of each other. |
|
Sites |
See Hosts. |
|
Spider |
See Internet robot, crawler, spider. |
|
Standard View |
A pre-defined version of a Master report, designed to meet the most common needs. |
Book Requests (Excluding OA_Gold) |
Standardized Usage Statistics Harvesting Initiative |
See SUSHI. |
|
SUSHI |
An international standard (ANSI/NISO Z39.93) that describes a method for automating the harvesting of reports. COUNTER_SUSHI is an implementation of this standard for harvesting COUNTER reports. COUNTER compliance requires content hosts to implement COUNTER_SUSHI. |
|
TAB Separated Value |
See TSV. |
|
TDM |
Text and data mining (TDM) is a computational process whereby text or datasets are crawled by software that recognizes entities, relationships, and actions. (STM Publishers) An Access_Method in a COUNTER report used to separate regular usage from usage that represents access to content for the purposes of text and data mining. |
|
Text and data mining |
See TDM. |
|
Thesis_Or_Dissertation |
A COUNTER data type. Dissertation: a long essay on a particular subject, especially one written as a requirement for the Doctor of Philosophy degree. Thesis: a long essay or dissertation involving personal research, written by a candidate for a college degree. |
|
Title |
The name of a book, journal, or reference work. |
|
Title Master Report |
A report that contains additional filters and breakdowns beyond those included in the standard COUNTER reports and is aggregated to the publication title level rather than to individual articles/chapters. |
|
Title Reports |
A series of COUNTER reports where usage is aggregated to the publication title level. |
|
TLS (HTTPS) |
Transport Layer Security (TLS) protocol, Hypertext Transfer Protocol Secure (HTTPS) protocol. |
|
Total_Item_Investigations |
A COUNTER Metric_Type that represents the number of times users accessed the content (i.e. full text) of an item, or information describing that item (i.e. an abstract). |
|
Total_Item_Requests |
A COUNTER Metric_Type that represents the number of times users requested the full content (i.e. full text) of an item. Requests may take the form of viewing, downloading, emailing, or printing content, provided such actions can be tracked by the content provider’s server. |
|
TR |
Title Master Report. |
|
TR_B1 |
Book Requests (Excluding “OA_Gold”). A pre-set book filter of TR showing full text activity for all content which is not Gold Open Access. Numbers between sites will vary based on whether the content is delivered as a complete book or by chapter. |
|
TR_B2 |
Book Access Denied. A pre-set book filter of TR showing where users were denied access because simultaneous use (concurrency) licenses were exceeded, or their institution did not have a license for the database. |
|
TR_B3 |
Book Usage by Access Type. A pre-set book filter of TR showing all applicable metric types broken down by Access_Type. |
|
TR_J1 |
Journal Requests (Excluding OA_Gold). A pre-set journal filter of TR showing full text activity for all content which is not Gold Open Access. |
|
TR_J2 |
Journal Access Denied. A pre-set journal filter of TR showing where users were denied access because simultaneous-use licenses were exceeded or because their institution did not have a license for the title. |
|
TR_J3 |
Journal Usage by Access Type. A pre-set journal filter of TR showing all applicable metric types broken down by Access_Type. |
|
TR_J4 |
Journal Requests by YOP (excluding OA_Gold). A pre-set journal filter of TR breaking down the full text usage of non-Gold Open Access content by year of publication (YOP). |
|
Transaction |
A usage event. |
|
TSV |
Tab Separated Values. |
|
Turnaway |
See Access denied. |
|
Unique item |
A distinct content item. |
|
Unique_Item_Investigations |
A COUNTER Metric Type that represents the number of unique Content Items investigated in a user-session. |
|
Unique_Item_Requests |
A COUNTER Metric Type that represents the number of unique content items requested in a user-session. Examples of items are articles, book chapters, and multimedia files. |
|
Unique title |
A distinct book title. |
|
Unique_Title_Investigations |
A COUNTER Metric Type that represents the number of unique titles investigated in a user-session. Examples of titles are journals and books. |
|
Unique_Title_Requests |
A COUNTER Metric Type that represents the number of unique titles requested in a user session. Examples of titles are journals and books. |
|
URI |
In information technology, a Uniform Resource Identifier (URI) is a string of characters used to identify a resource. Such identification enables interaction with representations of the resource over a network, typically the World Wide Web, using specific protocols. [Wikipedia] An optional element on a COUNTER report used to identify the item for which usage is being reported. |
|
URL |
Uniform Resource Locator. The address of a World Wide Web page. |
|
URN |
Uniform Resource Name, which identifies a resource by name in a particular namespace. |
|
Usage attributes |
Fields or elements used to classify or qualify COUNTER usage for analysis. |
Access_Type |
User |
A person who accesses the online resource. |
|
User agent |
An identifier that is part of the HTTP/S protocol that identifies the software (i.e. browser) being used to access the site. May be used by robots to identify themselves. |
|
User cookie |
A small piece of data sent from a website and stored on the user’s computer by the user’s web browser while the user is browsing. |
|
User session |
See Session. |
|
UTF-8 |
UTF-8 is a variable width character encoding capable of encoding all 1,112,064 valid code points in Unicode using one to four 8-bit bytes. The encoding is defined by the Unicode Standard, and was originally designed by Ken Thompson and Rob Pike. The name is derived from Unicode Transformation Format - 8-bit. [Wikipedia] |
|
Vendor |
A publisher or other online information provider who delivers licensed content to the customer and with whom the customer has a contractual relationship. |
Taylor & Francis, EBSCO |
Version of Record |
A fixed version of a journal article that has been made available by any organization that acts as a publisher that formally and exclusively declares the article “published”. |
|
W3C |
The World Wide Web Consortium is the main international standards organization for the World Wide Web. [Wikipedia] |
|
XML |
eXtensible Markup Language. |
|
Year of Publication |
See YOP. |
|
YOP |
Calendar year in which an article, item, issue, or volume is published. For the COUNTER YOP attribute, use the year of publication of the print version when it differs from that of the online version. |
|
Z39.50 |
An international standard protocol created by NISO for search. A Z39.50 client can search any Z39.50-compatible online service. Often used by federated search services to facilitate searching content at other sites. |
Appendix B: Changes from Previous Releases¶
B.1 Changes from COUNTER Release 4 (R4)¶
Changes in the nature of online content and how it is accessed have resulted in the COUNTER Code of Practice evolving in an attempt to accommodate those changes. This evolution resulted in some ambiguities and, in some cases, conflicts and confusions within the Code of Practice. Release 5 (R5) of the COUNTER Code of Practice is focused on improving the consistency, credibility, and comparability of usage reporting.
B.1.1 List of Reports¶
R5 reduces the overall number of reports by replacing many of the special-purpose reports that are seldom used with four Master Reports and a number of Standard Views that are more flexible. All COUNTER R4 reports have either been renamed or eliminated in favour of R5 Master Report or Standard View options.
R4 report |
R5 Report/Status |
Comments |
---|---|---|
Book Report 1: Number of Successful Title Requests by Month and Title |
Book Requests (Excluding OA_Gold) |
The Unique_Title_Requests metric is equivalent to the full-text requests in Book Report 1. |
Book Report 2: Number of Successful Section Requests by Month and Title |
Book Requests (Excluding OA_Gold) |
The Total_Item_Requests metric is equivalent to full text requests in Book Report 2. |
Book Report 3: Access Denied to Content Items by Month, Title and Category |
Book Access Denied |
Limit_Exceeded and No_License metrics are equivalent to those found in Book Report 3. |
Book Report 4: Access Denied to Content items by Month, Platform and Category |
Eliminated (no equivalent) |
“Book Access Denied” can be used to provide summary statistics by platform. For book collections the denials would be reported in “Database Access Denied”. |
Book Report 5: Total Searches by Month and Title |
Eliminated (no equivalent) |
For most platforms, attempting to track searches by titles is not reasonable since all titles are included in most searches. |
Book Report 7: Number of Successful Unique Title Requests by Month and Title in a Session |
Book Requests (Excluding OA_Gold) |
The Unique_Title_Requests metric is equivalent to the full-text requests in Book Report 7. |
Consortium Report 1: Number of Successful Full-Text Journal Article or Book Chapter Requests by Month and Title |
Eliminated |
Consortium administrators will request “Journal Requests (Excluding OA_Gold)” for each member. This can be automated via the COUNTER_SUSHI API using the /members path (a sketch of this pattern follows this table). Tools will be provided to create consolidated reports that are functionally equivalent to Consortium Report 1. |
Consortium Report 2: Total Searches by Month and Database |
Eliminated |
Consortium administrators will request “Database Usage” for each member. This can be automated via the COUNTER_SUSHI API using the /members path. Tools will be provided to create consolidated reports that are functionally equivalent to Consortium Report 2. |
Consortium Report 3: Number of Successful Multimedia Full Content Unit Requests by Month and Collection |
Eliminated |
For multimedia collections that are equivalent to databases, consortium administrators will request “Database Usage” for each member. This can be automated via the COUNTER_SUSHI API using the /members path. Tools will be provided to create consolidated reports that are functionally equivalent to Consortium Report 3. |
Database Report 1: Total Searches, Result Clicks and Record Views by Month and Database |
Database Usage |
Result Clicks and Record Views have been replaced by Total_Item_Investigations. Metrics for regular searches remain unchanged, and federated and automated searches are now reported separately. The report also includes Access Denied and Requests metrics. |
Database Report 2: Access Denied by Month, Database and Category |
Database Access Denied |
Report renamed and updated Metric_Types used. |
Journal Report 1: Number of Successful Full-Text Article Requests by Month and Journal |
Journal Requests (Excluding OA_Gold) |
Total_Item_Requests is the equivalent of the full-text total. HTML and PDF totals have been eliminated, but Unique_Item_Requests can be used to evaluate the effect of the user interface on statistics and offers a comparable statistic for cost-per-unique-use analysis. |
Journal Report 1 GOA: Number of Successful Gold Open Access Full-Text Article Requests by Month and Journal |
Title Master Report |
The Title Master Report can be filtered by “Access_Type=OA_Gold; Metric_Type=Total_Item_Requests” to obtain equivalent results. |
Journal Report 1a: Number of Successful Full-Text Article Requests from an Archive by Month and Journal |
Journal Requests by YOP (Excluding OA_Gold) |
The R5 report breaks out usage by year of publication (YOP) to enable evaluation of usage of content for which perpetual access rights are available. |
Journal Report 2: Access Denied to Full-Text Articles by Month, Journal and Category |
Journal Access Denied |
The Limit_Exceeded and No_License metrics are equivalent to the corresponding metrics in the R4 report. |
Journal Report 3: Number of Successful Item Requests by Month, Journal and Page-type |
Title Master Report, Item Master Report |
The Title Master Report can be configured to show Section_Types, which provides details similar to JR3. Other details like the audio and video usage can be reported in the Item Master Report (using the Component elements where appropriate). |
Journal Report 3 Mobile: Number of Successful Item Requests by Month, Journal and Page-type for usage on a mobile device |
Eliminated (no equivalent) |
Capturing usage by mobile devices is less relevant with the responsive design of most sites. The variety of mobile devices also makes it difficult, as does the fact that today’s smartphones have screen resolutions that exceed those of some desktops. |
Journal Report 4: Total Searches Run By Month and Collection |
Eliminated (no equivalent) |
To the extent that a journal collection is organized for searching as a discrete collection (rare), usage would be reported in “Database Usage”. |
Journal Report 5: Number of Successful Full-Text Article Requests by Year-of-Publication (YOP) and Journal |
Journal Requests by YOP (Excluding OA_Gold) |
This R5 report offers a breakdown of journal usage by year of publication (YOP) and the resulting report can be analysed using filters or pivot tables. |
Multimedia Report 1: Number of Successful Full Multimedia Content Unit Requests by Month and Collection |
Database Usage |
Multimedia usage, where multimedia is packaged and accessed as separate collections, would be reported using “Database Usage”. |
Multimedia Report 2: Number of Successful Full Multimedia Content Unit Requests by Month, Collection and Item Type |
Multimedia Item Requests |
The R5 report provides a more detailed breakdown by item and includes attributes such as Data_Type. This report can be used to provide summary statistics by type. |
Platform Report 1: Total Searches, Result Clicks and Record Views by Month and Platform |
Platform Usage |
The R5 report provides equivalent metrics as well as additional metrics related to item full-text requests. |
Title Report 1: Number of Successful Requests for Journal Full-Text Articles and Book Sections by Month and Title |
Title Master Report |
The Title Master Report offers a single report for books and journals and can show the usage broken down by Section_Type. |
Title Report 1 Mobile: Number of Successful Requests for Journal Full-Text Articles and Book Sections by Month and Title (formatted for normal browsers/delivered to mobile devices AND formatted for mobile devices/delivered to mobile devices) |
Eliminated (no equivalent) |
Capturing usage by mobile devices is less relevant with the responsive design of most sites. The variety of mobile devices also makes it difficult, as does the fact that today’s smartphones have screen resolutions exceeding those of some desktops. |
Title Report 2: Access Denied to Full-Text Items by Month, Title and Category |
Title Master Report |
The Title Master Report offers a single report for books and journals and includes the options to show Access Denied metrics. |
Title Report 3: Number of Successful Item Requests by Month, Title and Page Type |
Title Master Report |
The Title Master Report offers a single report for books and journals and can show Requests metrics. |
Title Report 3 Mobile: Number of Successful Item Requests by Month, Title and Page Type (formatted for normal browsers/delivered to mobile devices AND formatted for mobile devices/delivered to mobile devices) |
Eliminated (no equivalent) |
Capturing usage by mobile devices is less relevant with the responsive design of most sites. The variety of mobile devices also makes it difficult, as does the fact that today’s smartphones have screen resolutions exceeding those of some desktops. |
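The /members pattern cited for the Consortium Reports above can be sketched in a few lines. This is a minimal illustration rather than a prescribed implementation: the base URL and credentials are placeholders, the Python requests library is an assumption, and only the /members and /reports/{report_id} paths are drawn from the COUNTER_SUSHI API.

```python
import requests

# Hypothetical endpoint and credentials; substitute the values supplied
# by the content provider.
BASE_URL = "https://example.com/sushi"
AUTH = {"customer_id": "consortium001", "requestor_id": "example-requestor"}

# 1. List the consortium members known to the provider.
members = requests.get(f"{BASE_URL}/members", params=AUTH).json()

# 2. Harvest the TR_J1 Standard View for each member.
consolidated = {}
for member in members:
    params = dict(AUTH, customer_id=member["Customer_ID"],
                  begin_date="2019-01", end_date="2019-12")
    report = requests.get(f"{BASE_URL}/reports/tr_j1", params=params).json()
    consolidated[member["Customer_ID"]] = report
```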
B.1.2 Report Format¶
With R5, all COUNTER reports are structured the same way to ensure consistency, not only between reports, but also between the JSON and tabular versions of the reports. Now all reports share the same format for the header, the report body is derived from the same set of element names, total rows have been eliminated, and data values are consistent between the JSON and tabular version. (See Section 3.2). R5 also addresses the problem of terminology and report layouts varying from report to report, as well as JSON and tabular versions of the same report producing different results while still being compliant.
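As an illustration of the shared header, the JSON serialization carries the same elements as the first rows of the tabular version. The following Python dict sketches the general shape of a JSON Report_Header; the field names follow the COUNTER_SUSHI report format, while all values are placeholders.

```python
# Illustrative shape of an R5 JSON Report_Header (values are placeholders).
report_header = {
    "Created": "2019-02-01T12:00:00Z",
    "Created_By": "Example Content Provider",
    "Customer_ID": "exampleLibrary",
    "Report_ID": "TR_J1",
    "Release": "5",
    "Report_Name": "Journal Requests (Excluding OA_Gold)",
    "Institution_Name": "Example University",
    "Report_Filters": [
        {"Name": "Begin_Date", "Value": "2019-01-01"},
        {"Name": "End_Date", "Value": "2019-12-31"},
    ],
}
```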
B.1.3 Metric Types¶
Release 5 of the COUNTER Code of Practice strives for simplicity and clarity by reducing the number of metric types and standardizing them across all reports, as applicable. With R4, Book Reports had different metric types from those in Journal Reports or in additional attributes such as mobile usage, usage by format, etc. Most COUNTER R4 metric types have either been renamed or eliminated in favour of new R5 Metric_Types. The table below shows the R4 metric types as documented for SUSHI and their R5 state; a compact lookup distilled from the table follows it.
R4 Metric Types |
R5 Equivalence or Status |
Comments |
---|---|---|
abstract |
Total_Item_Investigations |
Actions against an item are tracked using the more generic Total_Item_Investigations metric. Due to the variety of types of item attributes that can be investigated, COUNTER no longer attempts to track them with separate Metric_Types. |
audio |
Eliminated |
This metric was only used in JR3/TR3 reports which saw little implementation or use. The intent was to represent activity of objects embedded in articles. |
data_set |
Eliminated |
When a content item is a dataset, the Total_Item_Requests metric is used in combination with a Data_Type of Dataset. |
ft_epub |
Total_Item_Requests |
More generic Total_Item_Requests are now used in place of format-specific metrics. |
ft_html |
Total_Item_Requests |
More generic Total_Item_Requests are now used in place of format-specific metrics. |
ft_html_mobile |
Eliminated |
Tracking of activity by mobile devices is no longer required for COUNTER compliance. |
ft_pdf |
Total_Item_Requests |
More generic Total_Item_Requests are now used in place of format-specific metrics. |
ft_pdf_mobile |
Eliminated |
Tracking of activity by mobile devices is no longer required for COUNTER compliance. |
ft_ps |
Total_Item_Requests |
More generic Total_Item_Requests are now used in place of format-specific metrics. |
ft_ps_mobile |
Eliminated |
Tracking of activity by mobile devices is no longer required for COUNTER compliance. |
ft_total |
Total_Item_Requests |
Total_Item_Requests is a comparable metric. |
image |
Eliminated |
This metric was only used in JR3/TR3 reports which saw little implementation or use. The intent was to represent activity of objects embedded in articles. |
multimedia |
Total_Item_Requests |
More generic Total_Item_Requests are now used in place of format-specific metrics. |
no_license |
No_License |
No change. |
other |
Eliminated |
Other usage provides no value. |
podcast |
Eliminated |
This metric was only used in JR3/TR3 reports which saw little implementation or use. The intent was to represent activity of objects embedded in articles. |
record_view |
Total_Item_Investigations |
Actions against an item are tracked using the more generic Total_Item_Investigations metrics. Due to the variety of types of item attributes that can be investigated, COUNTER no longer attempts to track them with separate Metric_Types. |
reference |
Total_Item_Investigations |
Actions against an item are tracked using the more generic Total_Item_Investigations metrics. Due to the variety of types of item attributes that can be investigated, COUNTER no longer attempts to track them with separate Metric_Types. |
result_click |
Total_Item_Investigations |
Actions against an item are tracked using the more generic Total_Item_Investigations metrics. Due to the variety of types of item attributes that can be investigated, COUNTER no longer attempts to track them with separate Metric_Types. |
search_fed |
Searches_Federated |
The single R4 metric covering automated and federated searches has been split into two separate metrics (Searches_Federated and Searches_Automated), since the nature of the activity is very different. |
search_reg |
Searches_Regular |
For database reports, use Searches_Regular. When reporting at the platform level use Searches_Platform. |
sectioned_html |
Total_Item_Requests |
More generic Total_Item_Requests are now used in place of format-specific metrics. |
toc |
Total_Item_Investigations |
Actions against an item are tracked using the more generic Total_Item_Investigations metrics. Due to the variety of types of item attributes that can be investigated, COUNTER no longer attempts to track them with separate Metric_Types. Note that for journals TOCs aren’t item-level objects, therefore TOC usage MUST NOT be reported for journals. |
turnaway |
Limit_Exceeded |
Renamed to provide more clarity into the nature of the access-denied event. |
video |
Eliminated |
This metric was only used in JR3/TR3 reports which saw little implementation or use. The intent was to represent activity of objects embedded in articles. |
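For providers and librarians migrating stored R4 usage data, the mapping above can be distilled into a simple lookup. The sketch below is derived entirely from the table; eliminated R4 metrics are deliberately absent, so the lookup returns None for them.

```python
# R4 metric type -> R5 Metric_Type, distilled from the table above.
R4_TO_R5 = {
    "abstract": "Total_Item_Investigations",
    "ft_epub": "Total_Item_Requests",
    "ft_html": "Total_Item_Requests",
    "ft_pdf": "Total_Item_Requests",
    "ft_ps": "Total_Item_Requests",
    "ft_total": "Total_Item_Requests",
    "multimedia": "Total_Item_Requests",
    "no_license": "No_License",
    "record_view": "Total_Item_Investigations",
    "reference": "Total_Item_Investigations",
    "result_click": "Total_Item_Investigations",
    "search_fed": "Searches_Federated",
    "search_reg": "Searches_Regular",  # use Searches_Platform at platform level
    "sectioned_html": "Total_Item_Requests",
    "toc": "Total_Item_Investigations",
    "turnaway": "Limit_Exceeded",
}

def map_metric(r4_name):
    """Return the R5 equivalent, or None if the R4 metric was eliminated."""
    return R4_TO_R5.get(r4_name)
```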
B.1.4 New elements and attributes introduced¶
With R4 the nature of the usage sometimes had to be inferred based on the name of the report. In an effort to provide more consistent and comparable reporting, R5 introduces some additional attributes that content providers can track with the usage and use to create breakdowns and summaries of usage.
Attribute |
Description |
Values |
---|---|---|
Access_Type |
Used in conjunction with Investigations and Requests, this attribute indicates if, at the time of the investigation or request, access to the item was controlled (e.g. subscription or payment required) or was available as Open Access or other free-to-read option. |
Controlled, OA_Gold, OA_Delayed, Other_Free_to_Read |
Access_Method |
This attribute is used to distinguish between regular usage (users accessing scholarly information for research purposes) and usage for the purpose of Text and Data Mining (TDM). |
Regular, TDM |
Data_Type |
Used to classify the nature of the item for which usage is being reported. |
Article |
Publisher_ID |
A unique identifier for the publisher, preferably a standard identifier such as ISNI (illustrated in the sketch after this table). For the JSON version of the report, the type (namespace) and value are separate. For tabular reports, the format is {namespace}:{value}. |
isni:123334445 |
Section_Type |
Used in conjunction with Data_Type, this attribute tracks requests to the level of the section requested. Used mostly with books where content may be delivered by chapter or section, this element defines the nature of the section retrieved. |
Article |
YOP |
This attribute records the year of publication of the item. The YOP attribute replaces the year-of-publication ranges in R4’s JR5 report and is tracked for all metrics in Title and Item Reports. |
A 4-digit year, e.g. 2012 |
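To make the Publisher_ID convention concrete, the two serializations of the example above can be written out as follows (shown as Python literals; the element names Type and Value follow the JSON report format):

```python
# JSON serialization: the type (namespace) and the value are separate.
publisher_id_json = [{"Type": "ISNI", "Value": "123334445"}]

# Tabular serialization: a single "{namespace}:{value}" cell.
publisher_id_tabular = "isni:123334445"
```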
Appendix C: Vendor/Aggregator/Gateway Declaration of COUNTER Compliance¶
We <name of Content Provider> (‘The Company’) hereby confirm the following:
1. That the following online usage reports supplied by The Company to its customers, which The Company claims to be ‘COUNTER-compliant’, conform to Release 5 of the COUNTER Code of Practice: <insert list COUNTER-compliant reports>
2. The Company agrees that it will implement the protocols specified in Section 7 of Release 5 of the Code of Practice to correct for the effects of federated searches and internet robots on usage statistics.
3. Where The Company supplies to customers online usage statistics not included in the usage reports covered in 1 above, but which use terms defined in the COUNTER Code of Practice, the definitions used by The Company are consistent with those provided in the COUNTER Code of Practice.
4. The Company will pay to COUNTER the Vendor Registration Fee (£350/US$500), unless The Company is a Member of COUNTER in good standing, for whom this fee is waived.
5. That to maintain COUNTER-compliant status, the usage reports provided by The Company to its customers will be independently audited according to a schedule and standards specified by COUNTER.
Appendix D: Guidelines for Implementation¶
Our Friendly Guide To Release 5 Technical Notes for Providers provides guidelines for implementation.
Appendix E: Audit Requirements and Tests¶
E.1 General Auditing Requirements¶
Audit Philosophy¶
The COUNTER audit procedures and tests set out in this Appendix seek to ensure that the usage reports provided by content providers are in line with the COUNTER Code of Practice and follow uniform agreed procedures. To this end, the COUNTER audit seeks to mirror the activity of an institution (a customer) carrying out usage on the content provider’s platform.
Third Party Hosts and Vendors¶
Two broad categories must be taken into account for usage statistics reporting, and each has additional audit requirements. These categories are:
Third-party hosts: Some publishers have their online content hosted by a third party that provides standard usage statistics reporting as part of a broader hosting service. In these cases, it is the third-party host that is audited. For the audit the third-party host must provide the auditor with a list of all publishers hosted by them and the COUNTER Reports and Standard Views offered by each. The auditor will then select a minimum of two publishers at random from the list and carry out the audit tests as specified below on the selected publishers.
Third-party vendors: Some publishers use third-party companies that provide bespoke usage-statistics reporting services, where the solutions used may differ significantly for each client publisher. In this case the third-party vendor must provide the auditor with a list of all their client publishers and the COUNTER Reports and Standard Views offered by each. The auditor will then select 10% of the publishers (up to a maximum of 5, with a minimum of 2) from this list and carry out the audit tests specified below.
No two third-party hosts/vendors are exactly alike. Prior to the audit each must discuss with COUNTER how they provide usage statistics reporting so that COUNTER can advise which of the two categories above applies to them.
Auditing and Test-Scripts¶
There are three stages in the COUNTER audit:
Stage 1: Format. Here the auditor reviews usage reports to confirm that they adhere to the COUNTER Code of Practice specification, not only in terms of overall format, but to make sure relevant identifiers, such as ISSNs and ISBNs, are presented correctly. Deviations from the specified COUNTER-compliant format, such as blank rows not required by the code specification or incorrectly formatted ISSNs, can cause problems when the COUNTER usage reports are processed automatically.
Stage 2: Data Integrity. Here the auditor confirms that the usage statistics reported by the content provider accurately record the activity carried out by the auditor during the audit. This includes checking that the content provider provides consistent usage statistics when its reports are accessed using different browsers, including Google Chrome, Internet Explorer, and Mozilla Firefox as a minimum. Note: COUNTER will review the selected browsers annually. The selection may change in the future, depending on which browsers are most widely used.
Stage 3: Report Delivery. Here the auditor tests that the content provider has implemented SUSHI correctly and that its reports can be accessed using SUSHI according to the instructions supplied by the content provider (which must comply with the NISO/COUNTER SUSHI standard). Implementation of SUSHI is a requirement for compliance and is covered by the Declaration of COUNTER Compliance signed by all compliant content providers. Delivery of reports via Excel or tab-separated value (TSV) file will still be required as specified in the COUNTER Code of Practice.
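By way of illustration, a single SUSHI harvesting call might look like the sketch below. The endpoint and credentials are placeholders and the Python requests library is an assumption; the /reports/tr_j1 path and the customer_id, requestor_id, begin_date, and end_date parameters follow the COUNTER_SUSHI API.

```python
import requests

BASE_URL = "https://example.com/sushi"   # hypothetical endpoint
params = {
    "customer_id": "exampleLibrary",
    "requestor_id": "example-requestor",
    "begin_date": "2019-01",   # yyyy-mm
    "end_date": "2019-12",
}

# Fetch the TR_J1 Standard View as JSON.
response = requests.get(f"{BASE_URL}/reports/tr_j1", params=params)
response.raise_for_status()
report = response.json()
print(report["Report_Header"]["Report_Name"])
```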
COUNTER defines specific audit test-scripts for each of the COUNTER-required usage reports. Because content providers may work with different auditors, the test-scripts help to ensure that each auditor follows a common auditing procedure.
The COUNTER auditor cannot express an opinion as to usage reported in respect of any other accounts or institutions, or as to aspects of the COUNTER Code of Practice not specifically tested. Release 5-compliant content providers are reminded, however, that their signed Declaration of COUNTER Compliance also covers these aspects of the COUNTER Code of Practice.
1. Frequency of the Audit¶
To maintain COUNTER-compliant status an independent audit is required within 6 months of a content provider being listed in the Register of COUNTER Compliant Content Providers and annually thereafter. (Excepted are content providers that are members of COUNTER in the Smaller Publisher category, which may be audited biennially, with permission from COUNTER). Failure to meet these audit requirements will result in a content provider losing its COUNTER-compliant status.
If COUNTER does not receive a satisfactory auditor’s report within the specified time, the following control procedures apply:
New content providers having signed the Declaration of Compliance:
6 months after signing |
A reminder from COUNTER that the first auditor’s report is required |
8 months after signing |
A final reminder from COUNTER that the first auditor’s report is required |
9 months after signing |
The content provider is removed from the registry and is notified by COUNTER that they are non-compliant and must not make reference to COUNTER or use the COUNTER logo. |
Content providers previously audited:
3 months following the due audit date |
A reminder from COUNTER that an auditor’s report is required |
4 months following the due audit date |
A further reminder from COUNTER that an auditor’s report is required |
5 months following the due audit date |
A final reminder from the Chair of the COUNTER Executive Committee that an auditor’s report is required |
6 months following the due audit date |
The content provider is removed from the registry and is notified by COUNTER that they are non-compliant and must not make reference to COUNTER or use the COUNTER logo. |
2. COUNTER Usage Reports for which an Independent Audit is Required¶
Independent audits are required for COUNTER reports according to host type(s). See Table 1 (below).
Table 1: COUNTER Reports Requiring Audit
Category |
Report ID (for SUSHI) |
R5 Report Name |
Master Report / Standard View |
Host Type |
---|---|---|---|---|
Platform |
PR |
Platform Master Report |
Master |
All |
Platform |
PR_P1 |
Platform Usage |
Standard View |
All |
Database |
DR |
Database Master Report |
Master |
- Aggregated Full Content |
Database |
DR_D1 |
Database Search and Item Usage |
Standard View |
- Aggregated Full Content |
Database |
DR_D2 |
Database Access Denied |
Standard View |
- Aggregated Full Content |
Title |
TR |
Title Master Report |
Master |
- Aggregated Full Content |
Title |
TR_B1 |
Book Requests (excluding OA_Gold) |
Standard View |
- Aggregated Full Content |
Title |
TR_B2 |
Book Access Denied |
Standard View |
- eBooks |
Title |
TR_B3 |
Book Usage by Access Type |
Standard View |
- Aggregated Full Content |
Title |
TR_J1 |
Journal Requests (excluding OA_Gold) |
Standard View |
- Aggregated Full Content |
Title |
TR_J2 |
Journal Access Denied |
Standard View |
- eJournals |
Title |
TR_J3 |
Journal Usage by Access Type |
Standard View |
- Aggregated Full Content |
Title |
TR_J4 |
Journal Requests by YOP (excluding OA_Gold) |
Standard View |
- Aggregated Full Content |
Item |
IR |
Item Master Report |
Master |
- Data Repository |
Item |
IR_A1 |
Journal Article Requests |
Standard View |
- Repository |
Item |
IR_M1 |
Multimedia Item Requests |
Standard View |
- Multimedia Collection |
3. General Conditions for Carrying out an Audit Test¶
COUNTER defines a reporting period as a calendar month. A report run for any given month MUST reflect all activity of a customer for the entire month in question.
This applies also to auditing activity. An auditor should always finalize the audit tests within a single calendar month. During the audit period, all activity on the audit accounts not instigated by the auditor should be prevented, as such activity will make the test reports unreliable and may result in further audit tests that can incur additional costs.
To prevent any collision of reported data, an auditor should be allowed to set up and maintain separate accounts for each of the audit tests. All accounts should be set up in such a way that only the auditor carrying out a test can access the content provider’s platform.
Prior to the audit, the content provider must supply to the auditor:
Account details for at least 4 separate accounts with access to all areas required to be tested (or with specific restrictions for turnaway testing).
Links to download usage reports in all required formats. COUNTER usage reports must be provided as tabular versions, which can be easily imported into Microsoft Excel pivot tables (see the loading sketch after this list).
SUSHI credentials for the test accounts to enable verification of SUSHI harvesting and formatting of the harvested reports.
A declaration that federated and automated searches have been disaggregated from any searches reported. See the COUNTER Code of Practice for further information on the protocols that apply to federated and automated searches.
If server-side caching is implemented, information on cache settings used should be provided. Note: Server-side caching can cause a discrepancy between the usage recorded in the audit tests and the usage reported by the content provider. Information on cache settings enables the auditor to take them into account when evaluating the results of the report tests. If the content provider does not provide this information, the auditor is likely to require further audit tests that may incur additional costs.
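As a convenience for working with the tabular reports mentioned above, they can also be loaded programmatically. The sketch below assumes the standard R5 tabular layout (a 12-row header followed by a blank row, with column headings on row 14) and uses the pandas library, which is an assumption rather than a requirement of the Code.

```python
import pandas as pd

# Skip the 12-row header plus the blank row; headings begin on row 14.
usage = pd.read_csv("tr_j1.tsv", sep="\t", skiprows=13)

# Summarize reporting-period totals by Metric_Type, ready to compare
# against the auditor's own activity log.
print(usage.groupby("Metric_Type")["Reporting_Period_Total"].sum())
```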
E.2 The Required Audit Outputs¶
If the auditor identifies one or more issues, the content provider MUST resolve them and pass the audit within 3 months to maintain COUNTER-compliant status. Please see Section 9.2 in the COUNTER Code of Practice.
The auditor will provide to the COUNTER Executive Committee a summary report including, as a minimum, the following information:
The name of the content provider
The audit period and date
The usage report(s) tested
For each usage report tested, the test results, expressed as a percentage of the reported figures over those expected
A summary of any material issues noted with the format/structure, data integrity, and/or delivery of the content provider’s reports. If there are no issues, a PASS should be noted.
A clear indication of the outcome of the audit: PASS, QUALIFIED PASS, or FAIL.
Any other comments that relate to the audit and are worthy of consideration by the COUNTER Executive Committee.
Sample Audit Report:
Content Provider: <name>
Audit Period: <mmm/yyyy>
Date: <mmm/yyyy>
Report | Usage Activity Result | Report Format: Tabular | Report Format: SUSHI | Data Integrity | Delivery: Reports Interface | Delivery: SUSHI Server | Opinion | Comments |
---|---|---|---|---|---|---|---|---|
TR_J1 | 100% | PASS | PASS | PASS | PASS | PASS | PASS | |
TR_B1 | 112% | PASS | REPORT TOTALS included | PASS | PASS | PASS | FAIL | SUSHI versions of reports must not have totals. |
A content provider may need to submit multiple audit reports, some of which may PASS and some of which may FAIL. The results of each report’s tests should be submitted on a separate line. For a content provider to maintain COUNTER-compliant status, each audited report must PASS.
E.3 The Required Audit Tests¶
Stage 1. Report Format: Checking the report layout and file-format against the COUNTER Code of Practice¶
The auditor will confirm that each of the audit reports complies with the COUNTER Code of Practice.
The following items will be checked:
The layout of the report (headers/footers, number of fields, field sequence, totals field, and format of reported numbers)
The conformity of identifiers to the required standard (e.g. ISSNs must be provided as nine characters, with a hyphen as the middle character)
The presence of all required file formats (a Microsoft Excel file, a tab-separated value (TSV) file, or a file that can be easily imported into Microsoft Excel)
That email alerts notifying customers that usage reports have been updated are sent in a timely manner
Flexibility in the reporting period so customers can specify the start and end months of data reported in the COUNTER reports
That COUNTER usage reports are available in JSON format in accordance with the COUNTER JSON schema specified by SUSHI. (Schema may be found on the NISO/SUSHI website at: http://www.niso.org/schemas/sushi/)
That the COUNTER schema covers all the COUNTER usage reports.
That the JSON formatted report produced via SUSHI matches the total of the relevant usage counted on the equivalent .tsv/Excel report offered by the content provider, i.e. a report should produce the same results irrespective of the format in which it is delivered.
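One way an auditor might script that last parity check is sketched below, assuming both versions of the report have already been downloaded, the tabular file follows the standard 13-row preamble, and pandas is available (file names are placeholders).

```python
import json

import pandas as pd

# Total usage from the tabular (TSV) version of the report.
tsv = pd.read_csv("tr_j1.tsv", sep="\t", skiprows=13)
tsv_total = tsv["Reporting_Period_Total"].sum()

# Total usage from the JSON version harvested via SUSHI.
with open("tr_j1.json") as fh:
    report = json.load(fh)
json_total = sum(
    instance["Count"]
    for item in report["Report_Items"]
    for performance in item["Performance"]
    for instance in performance["Instance"]
)

# The two serializations must agree.
assert tsv_total == json_total, "JSON and tabular totals differ"
```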
Stage 2. Data Integrity: Checking the usage numbers as reported¶
The audit-test must be conducted in such a way that the auditor’s activities during the audit-test can be isolated from other activities on the content provider’s site. Depending on the site being tested, the auditor must conduct the audit-test from a computer with a unique IP address and/or using a unique account number.
The auditor must accept user/machine and session cookies when prompted.
Platform Reports¶
The Platform Master Report will be COUNTER-compliant if the following Standard View passes the COUNTER audit and the figures reported within it match what is reported in the Master Report.
Platform Usage: A Standard View of the Platform Master Report offering platform-level usage summarized by Metric_Type.
An audit of this Standard View requires the following:
The auditor must have access to all databases as made available on the platform of the content provider.
Audit-test P1-1: Searches_Platform
Option 1: Platform has multiple databases, and it is possible to search over all databases, selected subset of databases, or a single database.
The auditor must run 100 searches on the platform, including 50 searches against only 1 selected database, 25 against 2 selected databases, and 25 against all databases. Each of these searches must report 1 Searches_Platform in the PR_P1 Standard View.
Option 2: Platform has multiple databases, and it is possible to search over all databases or a single database.
The auditor must run 100 searches on the platform, including 50 searches against only 1 selected database and 50 against all databases. Each of these searches must report 1 Searches_Platform in the PR_P1 Standard View.
Option 3: Platform has a single database.
The auditor must run 50 searches on the platform, with all 50 searches run against the 1 database. Each of these searches must report 1 Searches_Platform in the PR_P1 Standard View.
All searches, including those returning 0 results, must be reported as a Searches_Platform in the PR_P1 Standard View.
The auditor must allow at least 31 seconds between each search.
Each time a search is conducted, the auditor will record the search term, the database searched, and the number of results returned.
A content provider will pass this audit test when the sum of the searches reported by the content provider in PR_P1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the searches on the auditor’s report.
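The -8%/+3% reliability window recurs throughout these audit tests; expressed as arithmetic, the check is simply a bounded relative deviation, as in this sketch:

```python
def passes_reliability_window(reported, expected):
    """True when reported usage is within -8% / +3% of the expected figure."""
    deviation = (reported - expected) / expected
    return -0.08 <= deviation <= 0.03

# 95 searches reported against 100 logged by the auditor: -5%, a pass.
print(passes_reliability_window(95, 100))   # True
# 104 reported against 100 logged: +4%, a fail.
print(passes_reliability_window(104, 100))  # False
```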
Audit-test P1-2: Total_Item_Requests, Unique_Item_Requests and Unique_Title_Requests
Option 1: Platform has multiple databases that include titles.
The auditor must make a total of 100 requests on a subset of the unique items made available to them, including 50 requests against items not within titles (if available) and 50 requests against items within titles (if available).
If the platform does not have content outside of titles, then all 100 requests must be made to items within titles.
Each title must have 5 Items requested within it (reporting 5 Total_Item_Requests, 5 Unique_Item_Requests, and 1 Unique_Title_Requests).
It may not be possible to know which Title the Item being requested belongs to prior to the delivery of the Item. In this case, the auditor must note the title containing each Item as it is requested. This must result in 100 Total_Item_Requests and 100 Unique_Item_Requests in the PR_P1 Standard View.
The Unique_Title_Requests being reported in the PR_P1 Standard View will be determined by the number of unique titles noted by the auditor during the testing.
Option 2: Platform has multiple databases that do not include titles.
The auditor must make 100 requests on a subset of the unique items made available to them.
This must result in 100 Total_Item_Requests and 100 Unique_Item_Requests being reported in the PR_P1 Standard View. The number of Unique_Title_Requests being reported will be 0.
Option 3: Platform has a single database, which includes titles.
The auditor must make 50 requests on items made available to them, including 25 requests against items not within titles (if available) and 25 requests against Items within titles (if available).
If the platform does not have content outside of titles, then all 50 requests must be made to Items within titles.
Each title must have 5 Items requested within it (reporting 5 Total_Item_Requests, 5 Unique_Item_Requests, and 1 Unique_Title_Requests).
It may not be possible to know which Title the Item being requested belongs to prior to the delivery of the Item. In this case, the auditor must note the title containing each Item as it is requested.
This must result in 50 Total_Item_Requests and 50 Unique_Item_Requests being reported in the PR_P1 Standard View.
The Unique_Title_Requests being reported in the PR_P1 Standard View will be determined by the number of unique titles noted by the auditor during the testing.
Option 4: Platform has a single database, which does not include titles.
The auditor must make 50 requests on items made available to them.
This must result in 50 Total_Item_Requests and 50 Unique_Item_Requests being reported in the PR_P1 Standard View. The number of Unique_Title_Requests being reported will be 0.
Multiple paths should be used to make the requests. When possible, 50% of items requested should be via browsing the platform and 50% via searching. If either browsing to items or accessing items via searching is not possible, then 100% of the requests can be made via the only available option. The user may think they are browsing a list but may in fact be triggering searches. For this reason, requests made via browsing may generate unexpected searches; however, the end Item/Title will always be as expected.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sums of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in the PR_P1 Standard View for the auditor’s test account are within a -8% and +3% reliability window of the corresponding sums on the auditor’s report.
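The relationship between the three Request metrics in this test can be sketched from a hypothetical auditor’s log, where each entry records the item requested and, where known, its parent title (double-clicks are assumed to be already collapsed; the 30-second filter is tested separately in P1-3):

```python
# (item_id, parent_title_id or None) per request made by the auditor.
log = [
    ("item-1", "title-A"),
    ("item-2", "title-A"),
    ("item-1", "title-A"),   # a repeat request for an item already seen
    ("item-3", None),        # an item that sits outside any title
]

total_item_requests = len(log)
unique_item_requests = len({item for item, _ in log})
unique_title_requests = len({title for _, title in log if title})

print(total_item_requests, unique_item_requests, unique_title_requests)  # 4 3 1
```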
Audit-test P1-3: Total_Item_Requests and Unique_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache settings of the machines used for testing are disabled. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit test consists of clicking links to an item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Requests must be recorded. If the two clicks occur with more than 30 seconds between them, then 2 Total_Item_Requests must be counted. In both cases only 1 Unique_Item_Requests will be reported.
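A sketch of that filter is shown below; the 30-second threshold and the keep-the-second-click behaviour come straight from the rule above, while the chaining of successive clicks is an implementation assumption.

```python
def filter_double_clicks(clicks, threshold=30):
    """Collapse double-clicks: two clicks on the same item within
    `threshold` seconds count as one request, recorded at the time of
    the second click. `clicks` is a time-ordered list of
    (item_id, timestamp_in_seconds) pairs."""
    counted = []        # surviving requests
    last_index = {}     # item_id -> index of its most recent counted request
    for item, ts in clicks:
        idx = last_index.get(item)
        if idx is not None and ts - counted[idx][1] <= threshold:
            counted[idx] = (item, ts)   # keep the second click's timestamp
        else:
            last_index[item] = len(counted)
            counted.append((item, ts))
    return counted

# Two clicks 10 seconds apart count once; 40 seconds apart count twice.
print(len(filter_double_clicks([("a", 0), ("a", 10)])))   # 1 ("inside" test)
print(len(filter_double_clicks([("a", 0), ("a", 40)])))   # 2 ("outside" test)
```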
The auditor must carry out a total of 30 tests on the platform, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two identical requests are made, and the second request is made within 30 seconds of the first).
“Outside” tests (Two identical requests are made, and the second request is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Requests and 15 Unique_Item_Requests in the PR_P1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Requests and 15 Unique_Item_Requests in the PR_P1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in PR_P1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
Audit tests P1-1, P1-2 and P1-3 must take place in separate accounts so that each audit-test can be separately reported.
Database Reports¶
The Database Master Report will be COUNTER-compliant if the following Standard Views pass the COUNTER audits and the figures reported within them match what is reported in the Master Report.
Any Standard View that is not applicable to the content provider does not require auditing. This must be agreed with COUNTER prior to the audit.
Database Search and Item Usage: Reports on key search and request metrics needed to evaluate a database.
An audit of this Standard View requires the following:
The auditor must have access to all databases available on the platform of the content provider.
Audit-test D1-1: Searches_Regular and Searches_Automated
Option 1: The content provider offers multiple databases, and it is possible to search over all databases, a selected subset of databases, or a single database.
The auditor must run 100 searches, including 50 against only 1 selected database, 25 against 2 selected databases, and 25 against all databases (without actively choosing).
Each of these searches on a single database must report 1 Searches_Regular in the DR_D1 Standard View.
Each of these searches over 2 databases must report 1 Searches_Regular against each of the selected databases in the DR_D1 Standard View.
Each of these searches over all databases must report 1 Searches_Automated against each of the databases offered by the content provider in the DR_D1 Standard View.
Option 2: The content provider has multiple databases, and it is possible to search over all databases or a single database.
The auditor must run 100 searches, including 50 against only 1 selected database and 50 against all databases (without actively choosing).
Each of these searches on a single database must report 1 Searches_Regular in the DR_D1 Standard View.
Each of these searches over all databases must report 1 Searches_Automated against each of the databases offered by the content provider in the DR_D1 Standard View.
Option 3: The content provider has a single database.
The auditor must run 50 searches against the 1 database. Each of these searches must report 1 Searches_Regular in the DR_D1 Standard View.
All searches, including those returning 0 results, must be reported as a Searches_Platform in the DR_D1 Standard View.
The auditor must allow at least 31 seconds between each search.
Each time a search is conducted, the auditor will record the search term, the database searched, and the number of results returned.
A content provider will pass this audit test when the sum of the searches reported by the content provider in DR_D1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the searches on the auditor’s report.
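The attribution rules in audit-test D1-1 reduce to a small decision: a search against actively selected databases counts as Searches_Regular for each selected database, a search over all databases without an active choice counts as Searches_Automated for each database, and every search also counts once as Searches_Platform. A sketch of that logic, assuming a hypothetical three-database platform and simplifying how the Searches_Platform row is keyed:

```python
from collections import Counter

ALL_DATABASES = ["db_a", "db_b", "db_c"]  # hypothetical platform catalogue

def record_search(selected, metrics, all_databases=ALL_DATABASES):
    """Attribute one search to DR_D1 metric types (illustrative sketch).

    selected: databases the user actively chose, or None when the
    search runs over everything without an active choice.
    """
    if selected:
        for db in selected:
            metrics[(db, "Searches_Regular")] += 1
    else:
        for db in all_databases:
            metrics[(db, "Searches_Automated")] += 1
    # Every search, including one returning 0 results, counts once here;
    # keying the platform-level row as "All" is a simplification.
    metrics[("All", "Searches_Platform")] += 1

metrics = Counter()
record_search(["db_a"], metrics)          # 1 Searches_Regular for db_a
record_search(["db_a", "db_b"], metrics)  # 1 Searches_Regular for each
record_search(None, metrics)              # Searches_Automated for all three
assert metrics[("All", "Searches_Platform")] == 3
```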
Audit-test D1-2: Total_Item_Requests
The auditor must make 100 requests on a subset of unique Items made available by the content provider.
This must result in 100 Total_Item_Requests reported in the DR_D1 Standard View.
Multiple paths should be used to make the requests. When possible, 50% of items should be requested via browsing and 50% via searching. If either browsing to items or accessing items via searching is not possible, then 100% of items may be requested via the only available option. The user may think they are browsing a list but in fact be triggering searches; for this reason, making requests via browsing may generate unexpected searches. The end Item/Title, however, will always be as expected.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests reported by the content provider in DR_D1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests on the auditor’s report.
Audit-test D1-3: Total_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of making an Item Request twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Requests must be recorded. If the two clicks occur with more than 30 seconds between them, then 2 Total_Item_Requests must be counted.
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (The 2 requests are made to the same item, and the second request is made within 30 seconds of the first).
“Outside” tests (The 2 requests are made to the same item, and the second request is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Requests being reported in the DR_D1 Standard View.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Requests being reported in the DR_D1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests reported by the content provider in DR_D1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests on the auditor’s report.
Audit-test D1-4: Total_Item_Investigations
IMPORTANT NOTE: This test does not need to be carried out where the content provider does not offer Investigations that are not also Requests. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The auditor must make 100 Investigations on a subset of unique Items made available to them.
This must result in 100 Total_Item_Investigations.
Multiple paths should be used to make the Investigations. When possible, 50% of Item Investigations should be via browsing and 50% via searching. If either path is not possible, then 100% of Item Investigations may be made via the only available option. The user may think they are browsing a list but in fact be triggering searches; for this reason, Investigations made via browsing may generate unexpected searches. The end Investigation, however, will always be as expected.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Investigations reported by the content provider in DR_D1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Investigations on the auditor’s report.
Audit-test D1-5: Total_Item_Investigations 30-second filters
IMPORTANT NOTE: This test does not need to be carried out where the content provider does not offer Investigations that are not also Requests. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive investigations outside the double-click threshold.
The audit-test consists of making an Item Investigation twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Investigations made must be recorded. If the two clicks occur with more than 30 seconds between, then 2 Total_Item_Investigations must be counted.
The auditor must carry out a total of 30 tests, and each test will consist of 2 item investigations. There are 2 types of tests that must be carried out:
“Inside” tests (Two item investigations are made to the same item, and the second item investigation is made within 30 seconds of the first).
“Outside” tests (Two item investigations are made to the same item, and the second item investigation is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Investigations being reported in the DR_D1 Standard View.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Investigations being reported in the DR_D1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Investigations reported by the content provider in DR_D1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Investigations on the auditor’s report.
If the content provider does not offer Investigations that are not also Requests, the following figures, reported as a result of the D1-2 and D1-3 audit tests, must match in the DR_D1 Standard View:
Total_Item_Requests must match Total_Item_Investigations
Audit tests D1-1, D1-2, D1-3, D1-4, and D1-5 must take place in separate accounts so that each audit test can be separately reported.
Databases Access Denied: Reports on access-denied activity for databases where users were denied access because simultaneous-user licenses were exceeded, or their institution did not have a license for the database.
An audit of this Standard View requires the following:
Audit-test D2-1: Limit_Exceeded
IMPORTANT NOTE: This test cannot be carried out if the content provider does not offer a concurrent/simultaneous user limit. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The account used for this testing must have a concurrent/simultaneous-user limit set, and the number of registered users concurrently allowed must be declared by the content provider prior to the testing. Ideally the account should allow a single active user on the site requesting access to the database. This means that a second user accessing the database would be turned away.
Option 1: The content provider turns the user away when the concurrent/simultaneous-user limit is exceeded upon login.
The auditor will log into the site. This means that the user limit is at maximum active users.
The auditor will then attempt to log into the site using a different computer. The auditor should then be refused access because of exceeding the concurrent/simultaneous-user limit. Each time access is refused, the auditor will record this as Limit_Exceeded.
The auditor must force 50 Limit_Exceeded turnaways during testing.
Each of these concurrent/simultaneous turnaways must report 1 Limit_Exceeded in the DR_D2 Standard View.
Option 2: The content provider turns the user away when the concurrent/simultaneous user limit is exceeded upon searching or accessing a database.
The auditor will log into the site. This means that the user limit is at maximum active users. The user will then select and make a search on a database (or browse to a database).
The auditor will then log into the site using a different computer. The auditor will then repeat the action made on the previous computer (select and make a search on a database or browse to a database). After the search has been made (or database browsed to) the user should then be refused access because of exceeding the concurrent/simultaneous-user limit. Each time access is refused, the auditor will record this as Limit_Exceeded.
The auditor must force 50 Limit_Exceeded turnaways during testing.
Each of these concurrent/simultaneous turnaways must report 1 Limit_Exceeded in the DR_D2 Standard View.
Option 3: The content provider turns the user away when the concurrent/simultaneous-user limit is exceeded upon accessing an Item within a database.
The auditor will log into the site. This means that the user limit is at maximum active users. The user will then navigate to and request an Item.
The auditor will then log into the site using a different computer. The auditor will then repeat the action made on the previous computer (navigate to and request an Item). After the Item has been requested the user should then be refused access because of exceeding the concurrent/simultaneous-user limit. Each time access is refused, the auditor will record this as Limit_Exceeded.
The auditor must force 50 Limit_Exceeded turnaways during testing.
Each of these concurrent/simultaneous turnaways must report 1 Limit_Exceeded in the DR_D2 Standard View.
The auditor must allow at least 31 seconds between each attempt.
Each time a turnaway is made, the auditor will record the database on which the turnaway was produced. (In the case of a turnaway at login, the database will be All.)
A content provider will pass this audit test when the sum of the turnaways reported by the content provider in DR_D2 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the turnaways on the auditor’s report.
Audit-test D2-2: No_License
IMPORTANT NOTE: This test cannot be carried out if the content provider does not restrict site content or if restricted content is not displayed. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The account used for this testing must have restricted access to content, and the content that the user has no license to access must be declared by the content provider prior to the testing. Alternatively, the content provider may declare the content that the user does have a license to access.
The auditor will attempt to access content to which the account being used does not have access. Each time access is refused, the auditor will record No_License.
The auditor must force 50 No_License turnaways during testing.
Each of these “No License” turnaways must report 1 No_License in the DR_D2 Standard View.
The auditor must allow at least 31 seconds between each attempt.
Each time a turnaway is made, the auditor will record the database on which the turnaway was produced.
A content provider will pass this audit test when the sum of the turnaways reported by the content provider in DR_D2 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the turnaways on the auditor’s report.
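Both DR_D2 tests reduce to recording each refusal against the database (or “All” for refusals at login) under the applicable Metric_Type. A sketch of the auditor’s tally, with hypothetical database names:

```python
from collections import Counter

def record_turnaway(database, reason, tallies):
    """Record one refusal for DR_D2 (illustrative sketch). reason is
    "Limit_Exceeded" (simultaneous-user limit) or "No_License"
    (content not licensed); refusals at login are attributed to "All"."""
    assert reason in ("Limit_Exceeded", "No_License")
    tallies[(database, reason)] += 1

tallies = Counter()
record_turnaway("All", "Limit_Exceeded", tallies)  # refused at login
record_turnaway("db_a", "No_License", tallies)     # unlicensed database
assert tallies[("All", "Limit_Exceeded")] == 1
```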
Audit tests D2-1 and D2-2 must take place in separate accounts so that each audit test can be separately reported.
Title Reports¶
The Title Master Report will be COUNTER-compliant if the following Standard Views pass the COUNTER audits and the figures reported within them match what is reported in the Master Report.
Any Standard View that is not applicable to the content provider does not require auditing. This must be agreed with COUNTER prior to the audit.
Book Requests (excluding OA_Gold): Reports on full-text activity for non-Gold open access books as Total_Item_Requests and Unique_Title_Requests. The Unique_Title_Requests view provides comparable usage across book platforms. The Total_Item_Requests view shows overall activity; however, numbers between sites will vary significantly based on how the content is delivered (e.g. delivered as a complete book or by chapter).
An audit of this Standard View requires the following:
The auditor must have access to all book content made available by the content provider.
The Access_Type for all requests must be Controlled and not OA_Gold.
Audit-test B1-1: Total_Item_Requests and Unique_Title_Requests
The auditor must make a total of 100 requests on a subset of unique Items within book titles.
Each title must have 5 Items requested within it (reporting 5 Total_Item_Requests and 1 Unique_Title_Requests).
This must result in 100 Total_Item_Requests being reported in the TR_B1 Standard View.
This must result in 20 Unique_Title_Requests being reported in the TR_B1 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Title_Requests reported by the content provider in TR_B1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Title_Requests on the auditor’s report.
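The B1-1 arithmetic follows from how the unique metrics are defined: Total_Item_Requests counts every filtered request, while Unique_Title_Requests counts each title at most once per session. A sketch under the simplifying assumption that the 30-second filter has already been applied and all activity falls within one session:

```python
from collections import defaultdict

def tally_title_report(requests):
    """Derive TR_B1-style figures from (title, item) request pairs,
    assuming the 30-second filter is already applied and all activity
    falls within one session (illustrative sketch)."""
    items_per_title = defaultdict(set)
    total = 0
    for title, item in requests:
        total += 1
        items_per_title[title].add(item)
    return {
        "Total_Item_Requests": total,
        "Unique_Item_Requests": sum(len(s) for s in items_per_title.values()),
        "Unique_Title_Requests": len(items_per_title),
    }

# The B1-1 pattern: 20 titles x 5 distinct items each = 100 requests.
requests = [(f"book{t}", f"chapter{i}") for t in range(20) for i in range(5)]
figures = tally_title_report(requests)
assert figures["Total_Item_Requests"] == 100
assert figures["Unique_Title_Requests"] == 20
```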
Audit-test B1-2: Total_Item_Requests and Unique_Title_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit test consists of clicking links to an Item within a book title twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Requests must be recorded. If the two clicks occur with more than 30 seconds between, then 2 Total_Item_Requests must be counted. In both cases only 1 Unique_Title_Requests will be reported.
The auditor must carry out a total of 32 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same Item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same Item and the second request is made more than 30 seconds after the first).
The auditor must carry out 16 inside tests.
Where possible, each title must have 2 Item tests within it (reporting 2 Total_Item_Requests and 1 Unique_Title_Requests).
This must result in 16 Total_Item_Requests and 8 Unique_Title_Requests in the TR_B1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 16 outside tests.
Where possible, each title must have 2 Items requested within it (reporting 4 Total_Item_Requests and 1 Unique_Title_Requests).
This must result in 32 Total_Item_Requests and 8 Unique_Title_Requests in the TR_B1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 32 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Title_Requests reported by the content provider in TR_B1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Title_Requests on the auditor’s report.
Audit tests B1-1 and B1-2 must take place in separate accounts so that each audit test can be separately reported.
Book Access Denied: Reports on access denied activity for books where users were denied access because simultaneous-user licenses were exceeded, or their institution did not have a license for the book.
An audit of this Standard View requires the following:
Audit-test B2-1: Limit_Exceeded
IMPORTANT NOTE: This test cannot be carried out if the content provider does not offer a concurrent/simultaneous user limit. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The account used for this testing must have a concurrent/simultaneous-user limit set for book titles/items, and the number of registered users concurrently allowed must be declared by the content provider prior to the testing. Ideally the account should allow a single active user to access books. (This means that a second user accessing books will be turned away.)
The content provider turns the user away when the concurrent/simultaneous-user limit is exceeded for books.
The auditor will log into the site and access a book item. This means that the user limit is at maximum active users.
The auditor will then log into the site using a different computer. The auditor will then repeat the action made on the previous computer (access a book item). After the item has been requested the user should then be refused access because of exceeding the concurrent/simultaneous user limit. Each time access is refused, the auditor will record this as Limit_Exceeded.
The auditor must force 50 Limit_Exceeded turnaways during testing.
Each of these concurrent/simultaneous turnaways must report 1 Limit_Exceeded in the TR_B2 Standard View.
The auditor must allow at least 31 seconds between each request.
A content provider will pass this audit test when the sum of the Limit_Exceeded turnaways reported by the content provider in TR_B2 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Limit_Exceeded turnaways on the auditor’s report.
Audit-test B2-2: No_License
IMPORTANT NOTE: This test cannot be carried out if the content provider does not restrict site content or where restricted content is not displayed. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The account used for this testing must have restricted access to book content, and the book content that the user has no license to access must be declared by the content provider prior to the testing. Alternatively, the content provider may declare the content to which the user does have license to access.
The auditor will attempt to access book content that the account being used does not have access to. Each time access is refused, the auditor will record No_License.
The auditor must force 50 No_License turnaways during testing.
Each of these “No License” turnaways must report 1 No_License in the TR_B2 Standard View.
The auditor must allow at least 31 seconds between each attempt.
A content provider will pass this audit test when the sum of the No_License turnaways reported by the content provider in TR_B2 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the No_License turnaways on the auditor’s report.
Audit tests B2-1 and B2-2 must take place in separate accounts so that each audit test can be separately reported.
Book Usage by Access Type: Reports on book usage showing all applicable metric types broken down by Access_Type.
An audit of this Standard View requires the following:
The auditor must have access to all book content made available by the content provider.
Audit-test B3-1: Total_Item_Requests, Unique_Item_Requests and Unique_Title_Requests
Option 1: Content provider offers OA_Gold Items in addition to Controlled.
The auditor must make a total of 100 requests on a subset of unique Items within book titles (50 requests to book Items where the Access_Type is Controlled, and 50 requests to book items where the Access_Type is OA_Gold).
Each title must have 5 items requested within it (reporting 5 Total_Item_Requests, 5 Unique_Item_Requests and 1 Unique_Title_Requests).
This must result in 50 OA_Gold Total_Item_Requests and 50 Controlled Total_Item_Requests being reported in the TR_B3 Standard View.
This must result in 50 OA_Gold Unique_Item_Requests and 50 Controlled Unique_Item_Requests being reported in the TR_B3 Standard View.
This must result in 10 OA_Gold Unique_Title_Requests and 10 Controlled Unique_Title_Requests being reported in the TR_B3 Standard View.
Option 2: Content provider does not offer OA_Gold items.
The auditor must make a total of 100 requests on a subset of unique Items within book titles.
Where possible, each title must have 5 items requested within it (reporting 5 Total_Item_Requests, 5 Unique_Item_Requests, and 1 Unique_Title_Requests).
This must result in 100 Controlled Total_Item_Requests being reported in the TR_B3 Standard View.
This must result in 100 Controlled Unique_Item_Requests being reported in the TR_B3 Standard View.
This must result in 20 Controlled Unique_Title_Requests being reported in the TR_B3 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests, Unique_Item_Requests, and Unique_Title_Requests reported by the content provider in TR_B3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests, Unique_Item_Requests, and Unique_Title_Requests on the auditor’s report.
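The TR_B3 expectations are the TR_B1 arithmetic repeated per Access_Type. A sketch of the breakdown, again assuming pre-filtered, single-session activity and hypothetical title names:

```python
from collections import defaultdict

def tally_by_access_type(requests):
    """Derive TR_B3-style figures from (access_type, title, item)
    request tuples, with access_type "Controlled" or "OA_Gold";
    assumes pre-filtered, single-session activity (sketch)."""
    totals = defaultdict(int)
    items = defaultdict(set)
    titles = defaultdict(set)
    for access_type, title, item in requests:
        totals[access_type] += 1
        items[access_type].add((title, item))
        titles[access_type].add(title)
    return {at: {"Total_Item_Requests": totals[at],
                 "Unique_Item_Requests": len(items[at]),
                 "Unique_Title_Requests": len(titles[at])}
            for at in totals}

# B3-1, Option 1: 50 Controlled + 50 OA_Gold requests, 5 items per title.
reqs = [(at, f"{at}-book{t}", f"ch{i}")
        for at in ("Controlled", "OA_Gold")
        for t in range(10) for i in range(5)]
figures = tally_by_access_type(reqs)
assert figures["OA_Gold"]["Unique_Title_Requests"] == 10
assert figures["Controlled"]["Total_Item_Requests"] == 50
```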
Audit-test B3-2: Total_Item_Requests, Unique_Item_Requests and Unique_Title_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of clicking links to an Item within a book title twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Requests must be recorded. If the two clicks occur with more than 30 seconds between, then 2 Total_Item_Requests must be counted. In both cases only 1 Unique_Item_Requests and 1 Unique_Title_Requests will be reported.
Option 1: Content provider offers OA_Gold items in addition to Controlled items.
The auditor must carry out a total of 32 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same book item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same book item, and the second request is made over 30 seconds after the first).
The auditor must carry out 16 inside tests (8 tests to book items where the Access_Type is Controlled and 8 tests to book items where the Access_Type is OA_Gold).
Where possible, each title must have 2 book item tests within it (reporting 2 Total_Item_Requests, 2 Unique_Item_Requests, and 1 Unique_Title_Requests).
This must result in 8 Controlled Total_Item_Requests and 8 OA_Gold Total_Item_Requests in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Item_Requests and 8 OA_Gold Unique_Item_Requests in the TR_B3 Standard View.
This must result in 4 Controlled Unique_Title_Requests and 4 OA_Gold Unique_Title_Requests in the TR_B3 Standard View.
(This may not be the case if the content provider operates a cache server.)
The auditor must carry out 16 outside tests (8 tests to book items where the Access_Type is Controlled and 8 tests to book items where the Access_Type is OA_Gold).
Where possible, each title must have 2 book item tests within it (reporting 4 Total_Item_Requests, 2 Unique_Item_Requests, and 1 Unique_Title_Requests).
This must result in 16 Controlled Total_Item_Requests and 16 OA_Gold Total_Item_Requests in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Item_Requests and 8 OA_Gold Unique_Item_Requests in the TR_B3 Standard View.
This must result in 4 Controlled Unique_Title_Requests and 4 OA_Gold Unique_Title_Requests in the TR_B3 Standard View.
(This may not be the case if the content provider operates a cache server.)
Option 2: Content provider does not offer OA_Gold Items.
The auditor must carry out a total of 32 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same book item and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same book item, and the second request is made over 30 seconds after the first).
The auditor must carry out 16 inside tests.
Where possible, each title must have 2 book item tests within it (reporting 2 Total_Item_Requests, 2 Unique_Item_Requests, and 1 Unique_Title_Requests).
This must result in 16 Controlled Total_Item_Requests in the TR_B3 Standard View.
This must result in 16 Controlled Unique_Item_Requests in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Title_Requests in the TR_B3 Standard View.
(This may not be the case if the content provider operates a cache server.)
The auditor must carry out 16 outside tests.
Each title must have 2 book item tests within it (reporting 4 Total_Item_Requests, 2 Unique_Item_Requests, and 1 Unique_Title_Requests).
This must result in 32 Controlled Total_Item_Requests in the TR_B3 Standard View.
This must result in 16 Controlled Unique_Item_Requests in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Title_Requests in the TR_B3 Standard View.
(This may not be the case if the content provider operates a cache server.)
The auditor must allow at least 31 seconds between each of the 32 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests, Unique_Item_Requests, and Unique_Title_Requests reported by the content provider in TR_B3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests, Unique_Item_Requests, and Unique_Title_Requests on the auditor’s report.
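The expected figures for all of the inside/outside test matrices above follow the same arithmetic: an inside test yields 1 request, an outside test yields 2, each test targets a distinct item, and titles are shared between a fixed number of tests. A sketch of that arithmetic, checked against the B3-2 Option 2 expectations (the helper is illustrative only):

```python
def expected_double_click_figures(inside, outside, tests_per_title=2):
    """Arithmetic behind the 30-second-filter tests (sketch): an inside
    test yields 1 request, an outside test 2; each test targets a
    distinct item; every tests_per_title tests share one title."""
    tests = inside + outside
    return {
        "Total_Item_Requests": inside + 2 * outside,
        "Unique_Item_Requests": tests,          # one distinct item per test
        "Unique_Title_Requests": tests // tests_per_title,
    }

# B3-2, Option 2: 16 inside tests, then 16 outside tests.
assert expected_double_click_figures(16, 0) == {
    "Total_Item_Requests": 16,
    "Unique_Item_Requests": 16,
    "Unique_Title_Requests": 8,
}
assert expected_double_click_figures(0, 16) == {
    "Total_Item_Requests": 32,
    "Unique_Item_Requests": 16,
    "Unique_Title_Requests": 8,
}
```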
Audit-test B3-3: Total_Item_Investigations, Unique_Item_Investigations, and Unique_Title_Investigations
IMPORTANT NOTE: This test does not need to be carried out if the content provider does not offer investigations that are not also requests. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
Option 1: Content provider offers OA_Gold Items in addition to Controlled.
The auditor must make a total of 50 item investigations within a subset of book titles (25 Investigations of items within a book where the Access_Type is Controlled, and 25 investigations of items within a book where the Access_Type is OA_Gold).
Each title must have 5 investigations to unique Items within it (reporting 5 Total_Item_Investigations, 5 Unique_Item_Investigations, and 1 Unique_Title_Investigations).
This must result in 25 OA_Gold Total_Item_Investigations and 25 Controlled Total_Item_Investigations being reported in the TR_B3 Standard View.
This must result in 25 OA_Gold Unique_Item_Investigations and 25 Controlled Unique_Item_Investigations being reported in the TR_B3 Standard View.
This must result in 5 OA_Gold Unique_Title_Investigations and 5 Controlled Unique_Title_Investigations being reported in the TR_B3 Standard View.
Option 2: Content provider does not offer OA_Gold Items.
The auditor must make a total of 50 Investigations within a subset of book titles.
Each title must have 5 investigations to unique items within it (reporting 5 Total_Item_Investigations, 5 Unique_Item_Investigations, and 1 Unique_Title_Investigations).
This must result in 50 Controlled Total_Item_Investigations being reported in the TR_B3 Standard View.
This must result in 50 Controlled Unique_Item_Investigations being reported in the TR_B3 Standard View.
This must result in 10 Controlled Unique_Title_Investigations being reported in the TR_B3 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Investigations, Unique_Item_Investigations, and Unique_Title_Investigations reported by the content provider in TR_B3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Investigations, Unique_Item_Investigations, and Unique_Title_Investigations on the auditor’s report.
Audit-test B3-4: Total_Item_Investigations, Unique_Item_Investigations, and Unique_Title_Investigations 30-second filters
IMPORTANT NOTE: This test does not need to be carried out if the content provider does not offer investigations that are not also requests. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive investigations outside the double-click threshold.
The audit test consists of clicking links to an investigation of an item within a book title twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Investigations must be recorded. If the two clicks occur with more than 30 seconds between, then 2 Total_Item_Investigations must be counted. In both cases only 1 Unique_Item_Investigations and 1 Unique_Title_Investigations will be reported.
Option 1: Content provider offers OA_Gold Items in addition to Controlled.
The auditor must carry out a total of 32 tests, and each test will consist of 2 item investigations. There are 2 types of tests that must be carried out:
“Inside” tests (Two investigations are made to the same book item, and the second investigation is made within 30 seconds of the first).
“Outside” tests (Two investigations are made to the same book item, and the second investigation is made more than 30 seconds after the first).
The auditor must carry out 16 inside tests (8 Investigations to book items where the Access_Type is Controlled and 8 investigations to book items where the Access_Type is OA_Gold).
Each title must have 2 book item tests within it (reporting 2 Total_Item_Investigations, 2 Unique_Item_Investigations, and 1 Unique_Title_Investigations).
This must result in 8 Controlled Total_Item_Investigations and 8 OA_Gold Total_Item_Investigations in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Item_Investigations and 8 OA_Gold Unique_Item_Investigations in the TR_B3 Standard View.
This must result in 4 Controlled Unique_Title_Investigations and 4 OA_Gold Unique_Title_Investigations in the TR_B3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 16 outside tests (8 tests to book items where the Access_Type is Controlled and 8 tests to book items where the Access_Type is OA_Gold).
Each title must have 2 book item tests within it (reporting 4 Total_Item_Investigations, 2 Unique_Item_Investigations, and 1 Unique_Title_Investigations).
This must result in 16 Controlled Total_Item_Investigations and 16 OA_Gold Total_Item_Investigations in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Item_Investigations and 8 OA_Gold Unique_Item_Investigations in the TR_B3 Standard View.
This must result in 4 Controlled Unique_Title_Investigations and 4 OA_Gold Unique_Title_Investigations in the TR_B3 Standard View.
This may not be the case if the content provider operates a cache server.
Option 2: Content provider does not offer OA_Gold items.
The auditor must carry out a total of 32 tests, and each test will consist of 2 item investigations. There are 2 types of tests that must be carried out:
“Inside” tests (Two investigations are made to the same book item, and the second investigation is made within 30 seconds of the first).
“Outside” tests (Two investigations are made to the same book item, and the second investigation is made more than 30 seconds after the first).
The auditor must carry out 16 inside tests.
Each title must have 2 book item tests within it (reporting 2 Total_Item_Investigations, 2 Unique_Item_Investigations, and 1 Unique_Title_Investigations).
This must result in 16 Controlled Total_Item_Investigations in the TR_B3 Standard View.
This must result in 16 Controlled Unique_Item_Investigations in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Title_Investigations in the TR_B3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 16 outside tests.
Each title must have 2 book item tests within it (reporting 4 Total_Item_Investigations, 2 Unique_Item_Investigations, and 1 Unique_Title_Investigations).
This must result in 32 Controlled Total_Item_Investigations in the TR_B3 Standard View.
This must result in 16 Controlled Unique_Item_Investigations in the TR_B3 Standard View.
This must result in 8 Controlled Unique_Title_Investigations in the TR_B3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 32 tests.
A content provider will pass this audit test when the sum of the Total_Item_Investigations, Unique_Item_Investigations, and Unique_Title_Investigations reported by the content provider in TR_B3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Investigations, Unique_Item_Investigations, and Unique_Title_Investigations on the auditor’s report.
If the content provider does not offer Investigations that are not also Requests, the following figures, reported as a result of the B3-1 and B3-2 audit tests, must match in the TR_B3 Standard View:
Total_Item_Requests must match Total_Item_Investigations
Unique_Item_Requests must match Unique_Item_Investigations
Unique_Title_Requests must match Unique_Title_Investigations
Audit tests B3-1, B3-2, B3-3, and B3-4 must take place in separate accounts so that each audit test can be separately reported.
Journal Requests (excluding OA_Gold): Reports on usage of non-Gold open access journal content as Total_Item_Requests and Unique_Item_Requests. The Unique_Item_Requests view provides comparable usage across journal platforms by reducing the inflationary effect that occurs when an HTML full text automatically displays and the user then accesses the PDF version. The Total_Item_Requests view shows overall activity.
An audit of this Standard View requires the following:
The auditor must have access to all journal content made available by the content provider.
The Access_Type for all requests must be Controlled and not OA_Gold.
Audit-test J1-1: Total_Item_Requests and Unique_Item_Requests
The auditor must make a total of 100 requests on a subset of unique Journal Items.
This must result in 100 Total_Item_Requests being reported in the TR_J1 Standard View.
This must result in 100 Unique_Item_Requests being reported in the TR_J1 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in TR_J1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
Audit-test J1-2: Total_Item_Requests and Unique_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of clicking links to a journal item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Requests must be recorded. If the two clicks occur with more than 30 seconds between, then 2 Total_Item_Requests must be counted. In both cases only 1 Unique_Item_Requests will be reported.
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same journal item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same journal item, and the second request is made over 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Requests and 15 Unique_Item_Requests in the TR_J1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Requests and 15 Unique_Item_Requests in the TR_J1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in TR_J1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
Audit tests J1-1 and J1-2 must take place in separate accounts so that each audit test can be separately reported.
Journal Access Denied: Reports on Access Denied activity for journal content where users were denied access because simultaneous-user licenses were exceeded, or their institution did not have a license for the title.
An audit of this Standard View requires the following:
Audit-test J2-1: Limit_Exceeded
IMPORTANT NOTE: This test cannot be carried out where the content provider does not offer a concurrent/simultaneous-user limit. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The account used for this testing must have a concurrent/simultaneous-user limit set for journal items, and the number of registered users concurrently allowed must be declared by the content provider prior to the testing. Ideally, the account should allow a single active user to access journals. This means that a second user accessing journals will be turned away.
The content provider turns the user away when the concurrent/simultaneous-user limit is exceeded for journals.
The auditor will log into the site and access a journal item. This means that the user limit is at maximum active users.
The auditor will then log into the site using a different computer. The auditor will then repeat the action made on the previous computer (access a journal item). After the Item has been requested, the user should then be refused access because of exceeding the concurrent/simultaneous-user limit. Each time access is refused, the auditor will record this as Limit_Exceeded.
The auditor must force 50 Limit_Exceeded turnaways during testing.
Each of these concurrent/simultaneous turnaways must report 1 Limit_Exceeded in the TR_J2 Standard View.
The auditor must allow at least 31 seconds between each request.
A content provider will pass this audit test when the sum of the Limit_Exceeded turnaways reported by the content provider in TR_J2 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Limit_Exceeded turnaways on the auditor’s report.
Audit-test J2-2: No_License
IMPORTANT NOTE: This test cannot be carried out if the content provider does not restrict site content or where restricted content is not displayed. This must be declared to the auditor and the COUNTER Executive Committee prior to testing.
The account used for this testing must have restricted access to journal content, and the journal content that the user has no license to access must be declared by the content provider prior to the testing. Alternatively, the content provider may declare the content that the user does have license to access.
The auditor will attempt to access journal content that the account being used does not have access to. Each time access is refused, the auditor will record No_License.
The auditor must force 50 No_License turnaways during testing.
Each of these journal content not licensed turnaways must report 1 No_License in the TR_J2 Standard View.
The auditor must allow at least 31 seconds between each attempt.
A content provider will pass this audit test when the sum of the No_License turnaways reported by the content provider in TR_J2 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the No_License turnaways on the auditor’s report.
Audit tests J2-1 and J2-2 must take place in separate accounts so that each audit test can be separately reported.
Journal Usage by Access Type: Reports on usage of journal content for all metric types broken down by access type.
An audit of this Standard View requires the following:
The auditor must have access to all journal content made available by the content provider.
Audit-test J3-1: Total_Item_Requests and Unique_Item_Requests
Option 1: Content provider offers OA_Gold items in addition to Controlled.
The auditor must make a total of 100 requests on a subset of unique journal Items (50 requests to journal Items where the Access_Type is Controlled and 50 requests to journal items where the Access_Type is OA_Gold).
This must result in 50 OA_Gold Total_Item_Requests and 50 Controlled Total_Item_Requests being reported in the TR_J3 Standard View.
This must result in 50 OA_Gold Unique_Item_Requests and 50 Controlled Unique_Item_Requests being reported in the TR_J3 Standard View.
Option 2: Content provider does not offer OA_Gold Items.
The auditor must make a total of 100 requests on a subset of unique journal Items.
This must result in 100 Controlled Total_Item_Requests being reported in the TR_J3 Standard View.
This must result in 100 Controlled Unique_Item_Requests being reported in the TR_J3 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in TR_J3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
Audit-test J3-2: Total_Item_Requests and Unique_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of clicking links to a journal item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Requests must be recorded. If the two clicks occur with more than 30 seconds between them, then 2 Total_Item_Requests must be counted. In both cases only 1 Unique_Item_Requests will be reported.
Option 1: Content provider offers OA_Gold Items in addition to Controlled.
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same journal item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same journal item and the second request is made over 30 seconds after the first).
The auditor must carry out 15 inside tests (8 tests to journal items where the Access_Type is Controlled and 7 tests to journal items where the Access_Type is OA_Gold).
This must result in 8 Controlled Total_Item_Requests and 7 OA_Gold Total_Item_Requests in the TR_J3 Standard View.
This must result in 8 Controlled Unique_Item_Requests and 7 OA_Gold Unique_Item_Requests in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests (8 tests to journal items where the Access_Type is Controlled and 7 tests to journal items where the Access_Type is OA_Gold).
This must result in 16 Controlled Total_Item_Requests and 14 OA_Gold Total_Item_Requests in the TR_J3 Standard View.
This must result in 8 Controlled Unique_Item_Requests and 7 OA_Gold Unique_Item_Requests in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
Option 2: Content provider does not offer OA_Gold Items.
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same journal item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same journal item, and the second request is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Controlled Total_Item_Requests in the TR_J3 Standard View.
This must result in 15 Controlled Unique_Item_Requests in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Controlled Total_Item_Requests in the TR_J3 Standard View.
This must result in 15 Controlled Unique_Item_Requests in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in TR_J3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
Audit-test J3-3: Total_Item_Investigations and Unique_Item_Investigations
Option 1: Content provider offers OA_Gold Items in addition to Controlled.
The auditor must make a total of 50 investigations to a subset of unique journal items (25 Investigations of journal items where the Access_Type is Controlled and 25 Investigations of journal items where the Access_Type is OA_Gold).
This must result in 25 OA_Gold Total_Item_Investigations and 25 Controlled Total_Item_Investigations being reported in the TR_J3 Standard View.
This must result in 25 OA_Gold Unique_Item_Investigations and 25 Controlled Unique_Item_Investigations being reported in the TR_J3 Standard View.
Option 2: Content provider does not offer OA_Gold Items.
The auditor must make a total of 50 investigations to a subset of unique Journal Items.
This must result in 50 Controlled Total_Item_Investigations being reported in the TR_J3 Standard View.
This must result in 50 Controlled Unique_Item_Investigations being reported in the TR_J3 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Investigations and Unique_Item_Investigations reported by the content provider in TR_J3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Investigations and Unique_Item_Investigations on the auditor’s report.
Audit-test J3-4: Total_Item_Investigations and Unique_Item_Investigations 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive investigations outside the double-click threshold.
The audit-test consists of clicking links to an Investigation of a journal item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only the second Total_Item_Investigations must be recorded. If the two clicks occur with more than 30 seconds between them, then 2 Total_Item_Investigations must be counted. In both cases only 1 Unique_Item_Investigations will be reported.
Option 1: Content provider offers OA_Gold Items in addition to Controlled.
The auditor must carry out a total of 30 tests, and each test will consist of 2 Investigations. There are 2 types of tests that must be carried out:
“Inside” tests (Two investigations are made to the same journal item, and the second investigation is made within 30 seconds of the first).
“Outside” tests (Two investigations are made to the same journal item, and the second investigation is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests (8 tests to journal items where the Access_Type is Controlled and 7 tests to journal items where the Access_Type is OA_Gold).
This must result in 8 Controlled Total_Item_Investigations and 7 OA_Gold Total_Item_Investigations in the TR_J3 Standard View.
This must result in 8 Controlled Unique_Item_Investigations and 7 OA_Gold Unique_Item_Investigations in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests (8 tests to journal items where the Access_Type is Controlled and 7 tests to journal items where the Access_Type is OA_Gold).
This must result in 16 Controlled Total_Item_Investigations and 14 OA_Gold Total_Item_Investigations in the TR_J3 Standard View.
This must result in 8 Controlled Unique_Item_Investigations and 7 OA_Gold Unique_Item_Investigations in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
Option 2: Content provider does not offer OA_Gold Items.
The auditor must carry out a total of 30 tests, and each test will consist of 2 Investigations. There are 2 types of tests that must be carried out:
“Inside” tests (Two investigations are made to the same journal item, and the second investigation is made within 30 seconds of the first).
“Outside” tests (Two investigations are made to the same journal item, and the second investigation is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Controlled Total_Item_Investigations in the TR_J3 Standard View.
This must result in 15 Controlled Unique_Item_Investigations in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Controlled Total_Item_Investigations in the TR_J3 Standard View.
This must result in 15 Controlled Unique_Item_Investigations in the TR_J3 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Investigations and Unique_Item_Investigations reported by the content provider in TR_J3 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Investigations and Unique_Item_Investigations on the auditor’s report.
Audit tests J3-1, J3-2, J3-3, and J3-4 must take place in separate accounts so that each audit test can be separately reported.
Journal Requests by YOP (excluding OA_Gold): Breaks down the usage of non-Gold open access journal content by year of publication (YOP), providing counts for the metric types Total_Item_Requests and Unique_Item_Requests. Provides the details necessary to analyze usage of content in backfiles or covered by perpetual access agreements. Note: COUNTER reports do not provide access model or perpetual access rights details.
An audit of this Standard View requires the following:
The auditor must have access to all journal content made available by the content provider.
The Access_Type for all requests must be Controlled and not OA_Gold.
The auditor must record the Year of Publication (YOP) of every item accessed during audit testing.
The auditor must ensure that some full-text articles from different years of the same journal are requested during the J4-1 and J4-2 tests. Hence, the auditor should know the numbers expected to appear against each Year of Publication (YOP) in the TR_J4 report.
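In practice the auditor’s worksheet for TR_J4 is a tally of filtered requests keyed by journal and YOP. A sketch, assuming the 30-second filter has already been applied and that the repeat request in the example occurs more than 30 seconds after the first:

```python
from collections import Counter

def yop_breakdown(requests):
    """Group filtered journal requests by (journal, YOP) for a TR_J4
    worksheet (sketch). requests: (journal, article_id, yop) tuples;
    yop may be an int or the string "unknown"."""
    totals, unique, seen = Counter(), Counter(), set()
    for journal, article, yop in requests:
        totals[(journal, yop)] += 1
        if (journal, article) not in seen:
            seen.add((journal, article))
            unique[(journal, yop)] += 1
    return totals, unique

totals, unique = yop_breakdown([
    ("jrnl_x", "art1", 2017),
    ("jrnl_x", "art2", 2018),  # same journal, different YOP row
    ("jrnl_x", "art2", 2018),  # repeat >30s later: Total +1, Unique same
])
assert totals[("jrnl_x", 2018)] == 2 and unique[("jrnl_x", 2018)] == 1
```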
Audit-test J4-1: Total_Item_Requests and Unique_Item_Requests
The auditor must make a total of 100 requests on a subset of unique Journal Items.
This must result in 100 Total_Item_Requests being reported in the TR_J4 Standard View.
This must result in 100 Unique_Item_Requests being reported in the TR_J4 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in TR_J4 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
The auditor must confirm the Year of Publication (YOP) of articles covered in J4-1 with appropriate and proportionate spot checks, unless the article is “YOP unknown”.
Audit-test J4-2: Total_Item_Requests and Unique_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of clicking links to a Journal Item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only 1 Total_Item_Requests (for the second click) must be recorded. If more than 30 seconds elapse between the two clicks, 2 Total_Item_Requests must be counted. In both cases only 1 Unique_Item_Requests will be reported.
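Expressed as code, the counting rule under test looks like the following minimal sketch (illustrative only, treating all clicks as a single user session; the item identifiers and timestamps are hypothetical):

```python
# Minimal sketch of the 30-second double-click filter (illustrative only).
# Each click is (item_id, unix_time); clicks are assumed sorted by time
# and to belong to a single user session.
def count_requests(clicks):
    total, last_click, seen_items = 0, {}, set()
    for item_id, t in clicks:
        previous = last_click.get(item_id)
        if previous is None or t - previous > 30:
            total += 1  # outside the threshold: counts as a new Total_Item_Request
        # within 30 seconds the later click replaces the earlier one (count stays 1)
        last_click[item_id] = t
        seen_items.add(item_id)
    return total, len(seen_items)  # Total_Item_Requests, Unique_Item_Requests

# An "inside" pair counts once; an "outside" pair counts twice:
assert count_requests([("a", 0), ("a", 10)]) == (1, 1)
assert count_requests([("a", 0), ("a", 40)]) == (2, 1)
```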
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two item requests are made to the same journal item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two item requests are made to the same journal item, and the second request is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Requests and 15 Unique_Item_Requests in the TR_J4 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Requests and 15 Unique_Item_Requests in the TR_J4 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests and Unique_Item_Requests reported by the content provider in TR_J4 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests and Unique_Item_Requests on the auditor’s report.
The auditor must confirm the Year of Publication (YOP) of articles covered in J4-2 with appropriate and proportionate spot checks, unless the article is “YOP unknown”.
Audit tests J4-1 and J4-2 must take place in separate accounts so that each audit test can be separately reported.
Item Reports¶
The Item Master Report will be COUNTER compliant if the following Standard Views pass the COUNTER audits and the figures reported within them match what is reported in the Master Report.
Any Standard View that is not applicable to the content provider does not require auditing. This must be agreed prior to the audit by COUNTER.
Journal Article Requests: Reports on journal article requests at the article level. This report is limited to content with a Data_Type of Journal, a Section_Type of Article, and a Metric_Type of Total_Item_Requests.
An audit of this Standard View requires the following:
The auditor must have access to all journal article content available from the content provider.
Audit-test A1-1: Total_Item_Requests
The auditor must make a total of 100 requests on a subset of journal article Items.
This must result in 100 Total_Item_Requests being reported in the IR_A1 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests reported by the content provider in IR_A1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests on the auditor’s report.
Audit-test A1-2: Total_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of clicking links to a Journal Article Item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only 1 Total_Item_Requests (for the second click) must be recorded. If more than 30 seconds elapse between the two clicks, 2 Total_Item_Requests must be counted.
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same journal article item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same journal article item, and the second request is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Requests in the IR_A1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Requests in the IR_A1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests reported by the content provider in IR_A1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests on the auditor’s report.
Audit tests A1-1 and A1-2 must take place in separate accounts so that each audit test can be separately reported.
Multimedia Item Requests: Reports on multimedia requests at the item level.
An audit of this Standard View requires the following:
The auditor must have access to all multimedia content available from the content provider.
Audit-test M1-1: Total_Item_Requests
The auditor must make a total of 100 requests on a subset of multimedia items.
This must result in 100 Total_Item_Requests being reported in the IR_M1 Standard View.
The auditor must allow at least 31 seconds between each test.
A content provider will pass this audit test when the sum of the Total_Item_Requests reported by the content provider in IR_M1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests on the auditor’s report.
Audit-test M1-2: Total_Item_Requests 30-second filters
To ensure that the report is counting correctly as per the COUNTER Code of Practice, it is important that the browser cache is disabled on the machines used for testing. It is also important that the auditee confirms before the audit period whether or not they operate a cache server. If they do, this test will not report as the Code of Practice expects and is likely to under-report successive requests outside the double-click threshold.
The audit-test consists of clicking links to a multimedia item twice in succession (double-clicks). If the two clicks occur within a 30-second time-span, only 1 Total_Item_Requests (for the second click) must be recorded. If more than 30 seconds elapse between the two clicks, 2 Total_Item_Requests must be counted.
The auditor must carry out a total of 30 tests, and each test will consist of 2 requests. There are 2 types of tests that must be carried out:
“Inside” tests (Two requests are made to the same multimedia item, and the second request is made within 30 seconds of the first).
“Outside” tests (Two requests are made to the same multimedia item, and the second request is made more than 30 seconds after the first).
The auditor must carry out 15 inside tests.
This must result in 15 Total_Item_Requests in the IR_M1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must carry out 15 outside tests.
This must result in 30 Total_Item_Requests in the IR_M1 Standard View.
This may not be the case if the content provider operates a cache server.
The auditor must allow at least 31 seconds between each of the 30 tests.
A content provider will pass this audit test when the sum of the Total_Item_Requests reported by the content provider in IR_M1 Standard View for the auditor’s test account is within a -8% and +3% reliability window of the sum of the Total_Item_Requests on the auditor’s report.
Audit tests M1-1 and M1-2 must take place in separate accounts so that each audit test can be separately reported.
Stage 3. Report Delivery: Checking delivery of the reports¶
In addition to verifying the delivery of reports in a tabular format, the auditor will check that the COUNTER reports are downloadable using the SUSHI protocol. This may be tested using the COUNTER Report Validation Tool, an open-source tool that provides a series of web-forms and guidance to take users through the steps and parameters needed to connect successfully to SUSHI servers and download content provider reports. The COUNTER Report Validation Tool may be found at: https://www.projectcounter.org/validation-tool/.
A content provider will only pass an audit test if the JSON-formatted report produced via SUSHI matches the total of the relevant usage counted on the equivalent tabular report offered by the content provider. In other words, a report should produce the same results irrespective of the format in which it is delivered.
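To illustrate this check, the sketch below retrieves a Standard View via the COUNTER_SUSHI API and totals one metric for comparison against the provider's tabular report. This is a minimal sketch, not an audit tool: the base URL and credentials are hypothetical placeholders, while the report path, parameters, and JSON structure follow the COUNTER_SUSHI API specification.

```python
# Illustrative sketch: fetch TR_J1 via the COUNTER_SUSHI API and total one
# metric for comparison with the provider's tabular report.
import requests

BASE = "https://example.com/counter/r5"   # hypothetical SUSHI base URL
params = {
    "customer_id": "exampleLibrary",      # hypothetical credential
    "requestor_id": "exampleRequestor",   # hypothetical credential
    "begin_date": "2019-01",
    "end_date": "2019-06",
}
report = requests.get(f"{BASE}/reports/tr_j1", params=params).json()

# Sum the Total_Item_Requests counts across all items, months, and instances.
total_item_requests = sum(
    instance["Count"]
    for item in report.get("Report_Items", [])
    for performance in item.get("Performance", [])
    for instance in performance.get("Instance", [])
    if instance.get("Metric_Type") == "Total_Item_Requests"
)
# This figure should equal the Total_Item_Requests in the equivalent
# tabular TR_J1 for the same customer and date range.
print(total_item_requests)
```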
Appendix F: Handling Errors and Exceptions¶
As a rule, the structure of the SUSHI response will be governed by the SUSHI schema; therefore, any error conditions that can be reported will be specified within the SUSHI response. The following is a definition from the COUNTER_SUSHI API Specification that shows the format of the exception.
```json
"SUSHI_errorModel": {
  "type": "object",
  "description": "Generalized format for presenting errors and exceptions.",
  "required": [
    "Code",
    "Severity",
    "Message"
  ],
  "properties": {
    "Code": {
      "type": "integer",
      "format": "int32",
      "description": "Error number. See table of errors.",
      "example": 3040
    },
    "Severity": {
      "type": "string",
      "description": "Severity of the error.",
      "example": "Warning",
      "enum": [
        "Warning",
        "Error",
        "Fatal",
        "Debug",
        "Info"
      ]
    },
    "Message": {
      "type": "string",
      "description": "Text describing the error.",
      "example": "Partial Data Returned."
    },
    "Help_URL": {
      "type": "string",
      "description": "URL describing error details."
    },
    "Data": {
      "type": "string",
      "description": "Additional data provided to clarify the error.",
      "example": "Usage data has not been processed for all months."
    }
  }
}
```
As indicated in the JSON code above, multiple exceptions can be returned, and each exception has the following elements:
Code: a numeric exception number that identifies the exception. See Table F.1 for permissible values.
Severity: indicates whether the exception is one of:
Fatal: unable to complete the transaction. The problem is with the service; it may be temporary, and a retry could be successful. No report is returned. Example: Service Busy.
Error: unable to complete the transaction. The problem is with the request, such that a retry will not be successful unless the request or other configuration details change. No report is returned. Example: Requestor Not Authorized to Access Service.
Warning: the transaction can be completed, and a report is returned, but the report may differ from what was expected. Examples: Usage Not Ready for Requested Dates; Partial Data Returned.
Debug: reserved for use by developers as a means of providing additional data about the request or response to the calling application.
Message: textual description of the exception. For exception Codes > 999 the Message must exactly match column 1 of Table F.1.
Data: additional optional data that further describes the error. Example: for a “Partial Data Returned” exception, the Data could state “You requested 2017-01-01 to 2017-12-31; however, only 2017-01-01 to 2017-06-30 were available.”
Help_URL: an optional element that includes the URL of a help message that explains the exception in more detail.
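As a sketch of how a harvesting client might act on these elements (illustrative only; the helper function and sample exception below are not part of the specification):

```python
# Illustrative sketch: triage a single SUSHI_errorModel exception by Severity.
# The sample exception is hypothetical data conforming to the model above.
exception = {
    "Code": 3040,
    "Severity": "Warning",
    "Message": "Partial Data Returned.",
    "Data": "Usage data has not been processed for all months.",
}

def triage(exc: dict) -> str:
    """Decide what a client should do with one exception."""
    severity = exc["Severity"]
    if severity == "Fatal":
        return "retry-later"     # service problem may be temporary; retry could succeed
    if severity == "Error":
        return "fix-request"     # request is at fault; retrying unchanged will fail
    if severity == "Warning":
        return "accept-partial"  # a report is returned but may differ from expectations
    return "log-only"            # Info/Debug: informational for developers

print(triage(exception))  # -> "accept-partial"
```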
Table F.1 provides a list of possible exceptions that may occur for the COUNTER_SUSHI API. Note that some of the exceptions may also occur for tabular reports.
Table F.1 (below): Exceptions
| Exception Description (Message) | Severity | Exception Number (Code) | Invocation Conditions |
|---|---|---|---|
| Info or Debug | Info | 0 | Any. These Messages will never be standardized, and service providers can design them as they see fit. |
| Warnings | Warning | 1-999 | Any. This range is reserved for the use of service providers to supply their own custom warnings. |
| Service Not Available | Fatal | 1000 | Service is executing a request, but due to internal errors cannot complete the request. |
| Service Busy | Fatal | 1010 | Service is too busy to execute the incoming request. Client should retry the request after some reasonable time. |
| Report Queued for Processing | Warning | 1011 | Services queuing incoming report requests must return a response with this exception and no payload to inform the client about the processing status. Client should retry the request after some reasonable time. Note: this Exception was included in the amendments published on 11 December 2018 but initially was missing from Release 5.0.1. |
| Client has made too many requests | Fatal | 1020 | If the server sets a limit on the number of requests a client can make within a given timeframe, the server will return this error when the client exceeds that limit. The server would provide an explanation of the limit in the additional Data element (e.g., “Client has made too many requests. This server allows only 5 requests per day per requestor_id and customer_id.”). |
| Insufficient Information to Process Request | Fatal | 1030 | There is insufficient data in the request to begin processing (e.g., missing requestor_id, no customer_id, etc.). |
| Requestor Not Authorized to Access Service | Error | 2000 | If requestor_id is not recognized or not authorized by the service. |
| Requestor is Not Authorized to Access Usage for Institution | Error | 2010 | If requestor_id has not been authorized to harvest usage for the institution identified by the customer_id, or if the customer_id is not recognized. |
| APIKey Invalid | Error | 2020 | The service being called requires a valid APIKey to access usage data, and the key provided was not valid or not authorized for the data being requested. |
| Report Not Supported | Error | 3000 | The requested report name, or other means of identifying a report that the service can process, is not matched against the supported reports. |
| Report Version Not Supported | Error | 3010 | Requested version of the report is not supported by the service. |
| Invalid Date Arguments | Error | 3020 | Any format or logic errors involving date computations (e.g., end_date cannot be less than begin_date). |
| No Usage Available for Requested Dates | Error | 3030 | Service did not find any data for the date range specified. |
| Usage Not Ready for Requested Dates | Error, Warning | 3031 | Service has not yet processed the usage for one or more of the requested months; if some months are available, that data should be returned. The Exception should include the months not processed in the additional Data element. |
| Usage No Longer Available for Requested Dates | Warning | 3032 | Service does not have the usage for one or more of the requested months because the requested Begin_Date is earlier than the available data. If some months are available, that data should be returned. The Exception should include the months not processed in the additional Data element. Note: this Exception was included in the amendments published on 11 December 2018 but initially was missing from Release 5.0.1. |
| Partial Data Returned | Warning | 3040 | Request could not be fulfilled in its entirety. Data that was available was returned. |
| Parameter Not Recognized in this Context | Warning | 3050 | Request contained one or more parameters that are not recognized by the server in the context of the report being serviced. The server should list the names of unsupported parameters in the additional Data element of the Exception. Note: the server is expected to ignore unsupported parameters and continue to process the request, returning data that is available without the parameter being applied. |
| Invalid ReportFilter Value | Warning | 3060 | Request contained one or more filter values that are not supported by the server. The server should list the names of unsupported filter values in the additional Data element of the Exception. Note: the server is expected to ignore unsupported filters and continue to process the request, returning data that is available without the filter being applied. |
| Incongruous ReportFilter Value | Warning | 3061 | A filter element includes multiple values in a pipe-delimited list; however, the supplied values are not all of the same scope (e.g., an item_id filter includes article-level DOIs and journal-level DOIs or ISSNs). |
| Invalid ReportAttribute Value | Warning | 3062 | Request contained one or more report attribute values that are not supported by the server. The server should list the names of unsupported report attribute values in the additional Data element of the Exception. Note: the server is expected to ignore unsupported report attributes and continue to process the request, returning data that is available without the report attribute being applied. |
| Required ReportFilter Missing | Warning | 3070 | A required filter was not included in the request. Which filters are required will depend on the report and the service being called. For example, if the service requires that the request define the Platform name and no Platform filter is included, an exception would be returned. In general, the omission of a required filter would be viewed as an *Error*; however, if the service is able to process the request using a default value, then a *Warning* can be returned. The additional Data element of the Exception should name the missing filter. |
| Required ReportAttribute Missing | Warning | 3071 | A required report attribute was not included in the request. For example, if the service requires that the request define the Platform name and no Platform filter is included, an exception would be returned. In general, the omission of a required attribute would be viewed as an *Error*; however, if the service is able to process the request using a default value, then a *Warning* can be returned. The additional Data element of the Exception should name the missing attribute. |
| Limit Requested Greater than Maximum Server Limit | Warning | 3080 | The requested value for limit (number of items to return) exceeds the server limit. The server is expected to return data in the response (up to the limit). The Message element of the Exception should indicate the server limit. |
Note 1: An Error does not interrupt completion of the transaction (in the sense of a programmatic failure), although it may not return the expected report for the reason identified. A Fatal exception does not complete the transaction; the problem may be temporary, and a retry could be successful.
Note 2: Optional response: the service may respond with an additional exception of Info severity and include additional information in the Message. For example, if the client requests data for a date range whose begin_date is earlier than what the service offers, the service might include a Help_URL that provides more information about supported dates.
Note 3: If multiple exceptions are discovered, each exception should be returned in its own element.
Note 4: Clarifying details about an exception (e.g., the filter that was missing or deemed invalid) should be added to the Data element or, for custom warnings, the Message element of the exception so that the caller knows what to correct.
Note 5: If the caller gets the baseURL, the version, or the method wrong, the expectation is that they will receive an HTTP 404 error, since the specified path is not valid.
Appendix G: List of Federated Search Products¶
The following are lists of known (to COUNTER) federated search products and user-agent values that may be used to identify federated search activity for reporting as Searches_Federated in Database Reports.
NOTE: These lists are for reference purposes only and may not represent all current Federated Search Products (please contact COUNTER with updates).
Table G.1: Federated Search Products
| Federated Search Product | Vendor |
|---|---|
| 360 Search | |
| EBSCOhost Integrated Search | |
| Enterprise (Federated Search) | |
| EOS.Web | |
| MetaLib | |
| SEARCHit | |
Table G.2: Federated Search Agent “User Agent” values
| Federated Search User Agent |
|---|
| AGENTPORT-SCOCIT |
| AGENTPORT-SDICIT |
| AHMKEYS-SCOCIT |
| AHMKEYS-SCOFUL |
| ARCHIMINC-SCOCIT |
| ARCHIMINC-SDICIT |
| CITAVI-SCOCIT |
| CITAVI-SDICIT |
| COSMADRALI-SCOCIT |
| COSMADRALI-SDICIT |
| DEEPEX-SCOCIT |
| DEEPEX-SDIABS |
| DEEPEX-SDICIT |
| EDINGET-SCOCIT |
| EDINGET-SDICIT |
| ENCOMP-SCOCIT |
| ENCOMP-SDIABS |
| ENCOMP-SDICIT |
| GROGRO-SDICIT |
| HENKINTRA-SCOCIT |
| INERAEX-SCOCIT |
| INTELLIFED-SCOCIT |
| INTELLIFED-SDICIT |
| MEKPAPERS-SCOCIT |
| MEKPAPERS-SDICIT |
| METALIB-SCOCIT |
| METALIB-SDICIT |
| MUSESEARCH-SCOCIT |
| MUSESEARCH-SDICIT |
| NJIT-SCOCIT |
| NRLNAVY-SCOCIT |
| OCLCPICAZ2-SCOCIT |
| OCLCPICAZ2-SDICIT |
| OOIPSDWID-SDICIT |
| POTIRORDY-SCOCIT |
| POTIRORDY-SDICIT |
| QES-SCOCIT |
| QES-SDICIT |
| QINETIQ-SCOCIT |
| RIGHTS-SDIABS |
| RITENSE-SCOCIT |
| SERSOL-SCOCIT |
| SERSOL-SDICIT |
| SYSONEMCKIN-SCOFUL |
| SYSONEMCKIN-SDIABS |
| TDNETDF-SCOCIT |
| TDNETDF-SDICIT |
| TDNSRCHR-SCOCIT |
| TDNSRCHR-SDICIT |
| UAG-SCOCIT |
| UMIARERES-SCOCIT |
| UWASOCR-SCOCIT |
| UWASOCR-SCOFUL |
| VSPACES-SCOCIT |
| VSPACES-SDICIT |
| WEBFEAT-SCOCIT |
| WEBFEAT-SDICIT |
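A provider that records the User-Agent header in its logs might use the values in Table G.2 to classify searches, along the lines of the minimal sketch below (illustrative only; the helper function is hypothetical and the set shows only a few entries).

```python
# Illustrative sketch: classify a search as federated using Table G.2 values.
# Only a few entries are shown; in practice, load the full list.
FEDERATED_USER_AGENTS = {
    "AGENTPORT-SCOCIT",
    "METALIB-SCOCIT",
    "WEBFEAT-SDICIT",
}

def is_federated_search(user_agent: str) -> bool:
    """True if the request's User-Agent matches a known federated search agent."""
    return user_agent.strip().upper() in FEDERATED_USER_AGENTS

# Searches flagged this way would be reported as Searches_Federated
# rather than Searches_Regular in the Database Reports.
print(is_federated_search("METALIB-SCOCIT"))  # True
```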
Appendix H: Sample COUNTER Master Reports and Standard Views¶
The Master Reports and Standard Views in the following table are organized by reporting level, with Platform first, followed by Database and Title, and ending with Item. Within each reporting level, the Master Report appears first, followed by its Standard Views. Click the highlighted view link to see the corresponding tabular sample.
Table H.1: Sample COUNTER Master Reports and Standard Views
| Report_ID | Report_Name | Tabular Sample |
|---|---|---|
| PR | Platform Master Report | |
| PR_P1 | Platform Usage | |
| DR | Database Master Report | |
| DR_D1 | Database Search and Item Usage | |
| DR_D2 | Database Access Denied | |
| TR | Title Master Report | |
| TR_B1 | Book Requests (Excluding OA_Gold) | |
| TR_B2 | Book Access Denied | |
| TR_B3 | Book Usage by Access Type | |
| TR_J1 | Journal Requests (Excluding OA_Gold) | |
| TR_J2 | Journal Access Denied | |
| TR_J3 | Journal Usage by Access Type | |
| TR_J4 | Journal Requests by YOP (Excluding OA_Gold) | |
| IR | Item Master Report | |
| IR_A1 | Journal Article Requests | |
| IR_M1 | Multimedia Item Requests | |
Appendix I: List of internet robots, crawlers and spiders¶
The growing use of internet robots, crawlers and spiders has the potential to artificially inflate usage statistics. Only genuine, user-driven usage should be reported in COUNTER usage reports. Usage of full-text articles that is initiated by automatic or semi-automatic bulk download tools, such as Quosa or Pubget, should only be recorded when the user has clicked on the downloaded full-text article in order to open it.
Activity generated by internet robots, crawlers and spiders must be excluded from all COUNTER usage reports.
This list of internet robots, crawlers and spiders was published in April 2016 and updated in July 2016. Please note that it has been rationalised by removing previously redundant entries: for example, entries containing the text ‘bot’ (msnbot, awbot, bbot, turnitinbot, etc.) are now collapsed into the single entry ‘bot’.
The list is displayed below and is also available at https://github.com/atmire/COUNTER-Robots
That page always shows the current README, which gives potential users and contributors of the list more information on how to integrate it.
Please contact COUNTER to suggest user agents that should be added to this list or other amendments.
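Entries in the list are matched against the User-Agent header of incoming requests. The sketch below is a minimal illustration of how a provider might apply such a list when processing logs; the three patterns shown are a small illustrative sample, not the full list.

```python
# Illustrative sketch: exclude robot traffic before counting usage.
# The patterns below are a small sample in the spirit of the list
# (e.g., the single 'bot' entry replacing msnbot, awbot, bbot, ...).
import re

ROBOT_PATTERNS = [re.compile(p, re.IGNORECASE) for p in ("bot", "crawl", "spider")]

def is_robot(user_agent: str) -> bool:
    """True if the User-Agent matches any robot pattern; such activity
    must be excluded from all COUNTER usage reports."""
    return any(p.search(user_agent) for p in ROBOT_PATTERNS)

print(is_robot("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # True
print(is_robot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False
```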