Competition or Standardisation
A five-layer model of the necessary rights infrastructure is proposed. The five layers are:
(1) the legal (statutory and judicial) layer;
(2) the contractual and licence layer;
(3) the copyright accounting and settlement systems layer (CASS);
(4) the electronic copyright management systems layer (ECMS); and
(5) the technical copyright protection systems layer (TCPS).
For an effective market to operate in the best interests of all concerned, all five layers must themselves be effective.
The statutory and judicial layer is already the subject of considerable discussion by, inter alia, WIPO, the US Government and the EU, and will not be examined here in detail. Similarly, much work has been done on developing new contracts and licences for electronic publication. The remaining three layers are in principle mechanisable: they are the implementation of Charles Clark's dictum, 'the answer to the machine is in the machine'.
A range of projects, both publicly funded - for example, under the EU's Framework initiative - and privately funded by major players in the IT market, have proposed systems for the three mechanisable layers. It is argued that competing but generally compatible systems operating in each of the three layers will produce the most dynamic overall solution, and some of the remaining technological challenges are examined. In particular, the different approaches of various technical copyright protection systems are discussed.
Rights Clearance
Rights clearance is perhaps the single most important requirement for the economic exploitation of the information superhighway or global information infrastructure, and the current lack of an effective system of rights clearance is thought by many to be a major impediment to the growth of the information society.
An effective rights clearance infrastructure will:
(a) ensure that material distributed over the global network will earn a fair return for its owners - the authors who created it and the investors who provided the necessary capital, and
(b) ensure that society as a whole can benefit from the wider and faster availability of new and existing forms of material.
If creators and investors cannot earn a fair return, the inevitable result will be a dearth of quality material - and a corresponding deluge of rubbish, because quality material is intrinsically costly to produce. Although the new technology is reducing some of the costs associated with producing good material, it is also bringing in new costs. It is much cheaper to produce quality text now than it was when material had to be set in metal type; however, text is no longer always the best method of conveying information. Multimedia material requires investment, in real terms, at least as great as that which in a previous era was required for typesetting. If society is to enjoy quality material, those who create that material and those who invest financially in its production must be able to earn a return on their investment of creativity and money. This means that they must be able to charge a fair price for its use, and hence that they need to be able to control how it is used.
These economic fundamentals have not changed since the invention of the printing press. Copyright, the legislative solution developed in the three centuries following Gutenberg, remains the only realistic contender for today's new technology. It is intrinsically generic: it is not books and their printers which are protected by copyright, but works and their authors. The fundamental copyright principle is that there is a property right - copyright - in every creative work, which belongs first to the author of that work. The strength of this principle lies in its simplicity, but above all in its self-evident natural justice. However, the nature of the author's property right has changed with technology, and will change further.
However, whatever rights creators and investors have under the legislation, users must be able to clear them; this is the function of rights clearance. As well as defining the rights, legislation can be used to clear them, through the establishment of compulsory licensing.
Legislation alone is not, however, sufficient to ensure a return for authors and investors. At the very least, there needs to be a system of contracts, so that the author can attract financial investment in the work. This is the essence of publishing; the publisher is above all else a financial risk-taker who puts his or her money behind the author's work. The publisher earns a return on the sales of copies of the author's work, and he or she uses copyright law to prevent competitors from producing identical copies. The publisher's investment is protected by the author's copyright, and a contract is drawn up between author and publisher giving the publisher some interest in the author's copyright in return for the investment. In scholarly STM publishing, that contract is today almost always a straightforward transfer or assignment; but this practice is coming under some pressure, and it cannot be assumed that it will last into the future.
The single most significant result of today's technology is the simplicity of reproduction. Until the middle of this century, reproduction entailed nearly as great an investment as the preparation of an original. Digital techniques, however, mean that today reproductions indistinguishable from the original can be made by anyone with the equipment needed to read material first supplied in digital form. The ease with which copies can be made, and copyright infringed, is causing those involved in the business to take a close look at the entire question of copyright.
Allied to the potential erosion of the economics of publication, there is concern that the flexibility of the new technology threatens the integrity of a publication. The risk to integrity is a consequence of the technology itself.
As today's unpaid networks grow, commercial reality is emerging in a way which no-one had intended: advertising is becoming a dominant feature of the networks. Comparison with the world of paper confirms the principle that the only information given away for free is advertising. Brochures are free; books cost money. Even the universities, who are always vociferous in support of the principle of information provided free, have tended to concentrate on getting their prospectuses, rather than their research papers, on to the World Wide Web.
Modern information technology will be wasted if it is just another advertising medium. It is in the interests of global society that the global network should carry useful information, and if it is to do so, it must carry paid-for information. My learned colleague Charles Clark has coined a phrase which is well worth repeating: 'the answer to the machine is in the machine'. Computer technology is - at least theoretically - capable of enforcing copyright, just as it is capable of infringing copyright. It is equally capable of protecting the integrity of material.
The technical problems, however, are considerable. Computers and networks only really created a major impact when open standards became accepted. The open standards, which allow different computers to be connected together, also allow free transmission of material. Systems to control copyright abuse and protect integrity must respect open standards and the wide interconnection of computers. It is relatively easy to ensure that information is protected in a closed computing environment - the military have developed security systems based on keeping sensitive information permanently inside a trusted computing base, making information available on a need-to-know basis. This approach is clearly unacceptable to authors and publishers, who need to make their material as widely available as possible to a paying readership. The market dictates that the standards to which a protection system has to work are those used by potential readers.
Rights clearance is functional at a number of levels. In this paper, five levels or layers have been identified.
Statutory
Contractual
Copyright Accounting and Settlement Systems (CASS)
Electronic Copyright Management Systems (ECMS)
Technical Copyright Protection Systems (TCPS)
A system for rights clearance will incorporate functionality at all five levels.
1. Statutory
The principal statutory requirement is for an effective law of copyright, giving
property rights to those who create, and making the unauthorised use of the material
an infringement of those property rights. Much discussion has taken place on the
precise nature of the property rights involved and the nature of the uses which should
be restricted. Copyright pure and simple provides for copying to be a restricted act,
but technology and social change have brought other activities into the ambit of
copyright. These issues, including the proposed new transmission right, are perhaps
best discussed in other sessions. Integrity is also protected at the statutory level under
the droit moral.
There are further arguments for statutory reforms. On the one hand, groups representing copyright holders have argued for hardware-based copyright protection devices to be incorporated by law into all new hardware; on the other hand, there is considerable pressure for statutory licences which will force reluctant copyright holders to permit their material to be used over electronic networks, possibly in exchange for some statutorily-determined fee by way of equitable remuneration. However, statutory reform in democracies is a slow and cumbersome process, and if technology and the marketplace can achieve the same ends then that route is preferable.
2. Contractual protection
The contractual element is fundamental. Without contractual enforcement,
technical rights clearance is largely unworkable. For example, it can be a condition
of the supply of any material that it may only be used in association with a specified
technical protection system: thus, any copy of the material used without the protection
system is an infringing copy. At present, this form of protection is widely used to
protect material published on paper. Publishers have not generally licensed electronic
copies or produced electronic versions of paper publications outside a few
experimental sites. Thus, an electronic copy when discovered is clearly an
infringing copy. The person who scans or retypes from a printed original into an
electronic store is committing a deliberate act of copying without the consent of the
copyright holder. This fact has prevented the practice of document scanning of
copyright material becoming widespread; publishers have been making much more
categorical statements on the subject than ever they did when photocopying first
emerged.
Alternatively, automated means of ensuring compliance must be developed, but even so, there is a considerable task involved in mapping the contractual terms, drafted in legal language, to the functions which can be controlled by machine.
3. Copyright Accounting and Settlement Systems
The CASS layer provides for billing and settlement. It is a vital part of the future,
which has perhaps been somewhat neglected. Unless it is quick and simple to make a
payment, there will be temptations to commit piracy. It is indeed arguable that
the main problem today in photocopying is not the cost of photocopying permissions,
but the difficulty in obtaining and granting them. Some of the reproduction rights
organisations (RROs) have developed accounting and settlement systems for
photocopying, but further investment is needed. However, the task is primarily one
of re-engineering rather than re-invention: accounting and settlement by computer is
already well established. Digital payment systems such as the emerging Digicash and
Mondex systems may substitute for specialised copyright systems in the personal
market. However, when dealing with institutions, an accounting and settlement system
giving rise to periodic invoices settled through the regular banking system has
advantages. This is an area in which the RROs are likely to be able to realise
substantial economies of scale by establishing a collective CASS, and in the strict
terms used by RROs, this is the major part of what is meant by collective
administration.
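The core of a CASS of this kind is simple to sketch: per-use fees recorded by the layer above are aggregated into periodic invoices, one per institution, settled through the regular banking system. The following is purely illustrative - the institution names, work identifiers and fees are invented for the example and belong to no actual RRO system:

```python
from collections import defaultdict

# Hypothetical usage events recorded upstream: (institution, work, fee).
events = [
    ("Univ. A", "work-001", 0.50),
    ("Univ. A", "work-002", 0.75),
    ("Univ. B", "work-001", 0.50),
    ("Univ. A", "work-001", 0.50),
]

def periodic_invoices(events):
    """Aggregate per-use fees into one invoice total per institution."""
    totals = defaultdict(float)
    for institution, _work, fee in events:
        totals[institution] += fee
    return dict(totals)

print(periodic_invoices(events))  # one periodic invoice per institution
```

The point of the sketch is that the settlement task is re-engineering, not re-invention: it is ordinary computerised accounting applied to usage records.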
4. Copyright Management
In addition to the technology protecting material from abuse, legitimate use has
to be properly managed. This is the function of an ECMS (Electronic Copyright
Management System). The CITED (Copyright In Transmitted Electronic Documents)
group produced one of the first specifications for an ECMS. It examined the issue of
authorising legitimate use and identified a number of tools and data elements which
would be required, firstly to ensure that the legitimate user accessed the material and
secondly to ensure that accesses were properly recorded, so that the rights-holder
could be fairly rewarded. Authorisation management is required whichever basic
access technology is adopted. The principal contribution of the CITED group has
been to identify, firstly, the notion of an event as the
minimum accounting unit and, secondly, to define the data triplet of {agent, usage,
information} which characterises each event. The functionality of the CITED tools
follows from these fundamental definitions, but the concepts are intrinsic to the issue
and will emerge from any ab initio analysis. The CITED model also
introduces a number of procedures for assuring integrity.
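The event model described above can be sketched in a few lines. The field names below follow the CITED {agent, usage, information} triplet; the class name, method and sample values are this editor's invention for illustration, not part of the CITED specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """One event: the minimum accounting unit."""
    agent: str        # who performed the access
    usage: str        # what kind of use was made (view, print, copy...)
    information: str  # which work, or part of a work, was used

# An ECMS records one Event per access, so that the rights-holder
# can later be rewarded in proportion to actual use.
log = [
    Event("reader-17", "view", "article-42"),
    Event("reader-17", "print", "article-42"),
]
views = sum(1 for e in log if e.usage == "view")
print(views)  # -> 1
```

Everything else in the management layer - authorisation, metering, settlement - can be expressed as operations over a log of such triplets.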
One function of the ECMS layer is to deal with the problem of multiple authorisation from multiple rights-holders. This is not yet perceived as a pressing problem in the scholarly publishing field, but it cannot be ignored. Firstly, there is the question of the author's rights. In Europe, authors' unions - who may, however, not be particularly representative of academic contributors - are becoming more and more vocal on this question, and in some jurisdictions the author already has some form of inalienable right. In the US, the challenge is coming from a different direction, notably the library and academic community itself. It has always been assumed that the academic contributor personally has the right to assign copyright to the publisher, but it is at least arguable that the original copyright in fact belongs to the researcher's employer. If this is the case, then most copyright assignments made by a contributor to a publisher are in fact worthless. The legal validity of these positions is a complex point which cannot be discussed here, but the fact of their existence must be taken into account when designing copyright management systems.
Secondly, there is the multimedia question. It is rapidly becoming clear that one of the strengths of electronic publication lies in the publication of material other than words - and indeed, the range of multimedia technologies may offer better ways of communicating scientific knowledge than words themselves. Film, sound and multimedia production requires greater investment and producers are not likely to agree to a simple assignment of rights. Creating multimedia works is often an act of compilation, and the source may well be a compilation or anthology itself. Keeping tabs on the relevant rights through several generations of multimedia work is an essential rôle for an electronic copyright management system, and presents some interesting technical challenges.
However, the most interesting technical work today comes at the fifth layer in the model.
5. Technical Copyright Protection Systems (TCPS)
Technical copyright protection systems protect material from unauthorised use
by various technical means integrated into the material and the manner of its
presentation. The problem was thought to be insuperable on account of the mythical
abilities of teenage hackers; their ability to exploit weaknesses in computer systems
has probably been exaggerated by the media, but the media attention has nevertheless
prompted the military and the financial services sector, as well as the computer
industry, to concentrate on computer security issues which in the past may have been
neglected.
Now, however, as computers become part of the mainstream for most people in the developed world the problem of the adolescent hacker must be put into perspective. Most people cannot be bothered with the tedium of hacking, just as most people prefer to get their money out of the bank legitimately by using an ATM card rather than gelignite. Protection systems which are impossible to break are impossible to build, but protection systems which make it easier to behave legitimately than illegitimately are eminently practical.
At present, three principal protection technologies are emerging. Various international groups are pursuing these technologies with a view to establishing a workable system.
The three technologies may be called Encryption, Tattooing and Fingerprinting, and each has its advantages and disadvantages. The three approaches are complementary, and practical working systems are likely to incorporate aspects of all three. Although the motivation behind the development of these systems is primarily economic, they also provide an intrinsic protection of the integrity of the material.
Encryption:
This is sometimes referred to as 'wrapping' or 'enveloping', but it
is essentially an encryption process. The material to be protected is encrypted so that it
can only be read using a key. The key is issued to authorised users, either in return for
payment or following proper confirmation that the user is authorised to read the
material.
Encryption is the only technology which prevents the first unauthorised use (modern encryption systems can be virtually unbreakable). However, once decrypted, material is vulnerable to abuse. Most current encryption technologies ensure that decryption only takes place into a secure environment, either within a trusted institution or into a restricted and controlled environment set up on the user's computer. The COPICAT project under the EU's Esprit programme uses this latter approach.
Encryption, however, has a number of other disadvantages. The technology is militarily sensitive (we are only now beginning fully to appreciate the decisive rôle that cryptography and cryptanalysis played in determining the outcome of the Second World War), and the security services are anxious to prevent it falling into undesirable hands. They are certainly too late, as powerful encryption software is freely available on the Internet; a program called Pretty Good Privacy can easily be downloaded. Attempts by governments to control encryption technology have failed, but it is still illegal to use encryption in a number of jurisdictions, and encryption technology may still not legally be exported from the United States of America.
A further problem at present is that encryption/decryption can impose a heavy overhead on machine time, sometimes slowing down the operation unacceptably. Although this problem is likely to be resolved by the growth in processor power, the same effect also makes encryption more vulnerable to the brute force attack of trying every possible combination. Nevertheless, for copyright purposes the risks and threats are much lower than those faced by the financial sector and the military; we can afford to be followers rather than leaders.
Encryption's great weakness is that the decrypted file can be read by anyone. Systems which use encryption for the purposes of copyright protection deal with this problem by permitting decryption only into a controlled environment, from which material cannot be transmitted or even saved (but, by contrast, the encrypted form can be freely transmitted and saved). This method is adopted by the COPICAT project, in which the CLA is a partner, and will be put to intensive testing during the first half of this year, when computer science students at University College Dublin will be asked to attempt to crack the protection methods.
Integrity is potentially at risk if the material can be decrypted, altered, then re-encrypted using the same key. The most secure encryption techniques use asymmetric keys, which automatically deal with this threat; with symmetric-key encryption (where the same key is used to encrypt as to decrypt), additional checks are normally built into the system.
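The symmetric-key case, with its additional integrity check, can be illustrated with a deliberately toy sketch. The keystream construction below is for illustration only and is not a secure cipher: the work is XORed with a key-derived stream, and a keyed integrity tag is appended, so that a file altered and re-sealed by someone who does not hold the key fails verification on decryption:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy key-derived stream: counter-mode hashing (illustration only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR with the keystream, then append a keyed integrity tag."""
    body = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    return body + hmac.new(key, body, hashlib.sha256).digest()

def decrypt(key: bytes, message: bytes) -> bytes:
    body, tag = message[:-32], message[-32:]
    # The integrity check: an altered-and-resealed file fails here.
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(c ^ k for c, k in zip(body, keystream(key, len(body))))

key = os.urandom(32)                       # issued only to authorised users
locked = encrypt(key, b"the protected work")
print(decrypt(key, locked))
```

A real system would use an established cipher rather than this construction, but the shape is the same: without the key, the sealed form can be copied freely yet neither read nor undetectably altered.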
Tattooing
Tattooing (or watermarking) involves the creation of a permanent, indelible mark
in the digital record. A range of technologies have been developed, of which the
UDID system described by Douglas Armati and developed by Professor Turner at
Imperial College London is perhaps the most comprehensive. Under the UDID
system (which is much more than merely a tattooing system), what is recorded is the
data identifier. Similar systems have been developed by the music industry. A
particular strength of these systems is the ability to survive digital-to-analogue
conversion and analogue-to-analogue copying; removing the tattoo or watermark
results in unacceptable degradation of the image. This feature also provides intrinsic
integrity protection.
A tattooing system cannot by itself deal with the problem of unauthorised use. In most cases it is envisaged that the tattoo is read by a metering system which is then used to calculate payments based on usage.
Tattooing requires some data redundancy in the file to accommodate the tattoo: very compact file formats, such as text in 7-bit ASCII, may not be able to provide this space, although in any bit-mapped representation there is always ample space.
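In a bit-mapped representation that redundancy is easy to see. Below is a minimal least-significant-bit sketch of one common watermarking family, simplified to the point of being trivially removable (practical systems spread the mark redundantly so that removal degrades the image); the sample pixel values and payload are invented for the example:

```python
def embed(pixels, payload_bits):
    """Hide one payload bit in the least significant bit of each pixel."""
    marked = [(p & ~1) | b for p, b in zip(pixels, payload_bits)]
    return marked + pixels[len(payload_bits):]

def extract(pixels, n):
    """Recover the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 201, 13, 54, 99, 140]   # toy greyscale values
mark = [1, 0, 1, 1]                    # e.g. part of a work identifier
tattooed = embed(pixels, mark)
print(extract(tattooed, 4))            # -> [1, 0, 1, 1]
```

Changing only the lowest bit alters each pixel value by at most one, which is why the mark is imperceptible to the viewer: the tattoo lives entirely in the redundancy of the format.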
Fingerprinting
Fingerprinting is an extension of tattooing. As well as the mark identifying the
material, a mark identifying the user or the user's system is included by the user's
system at the time of use. This allows the right person to be charged the fee and
creates an identifying trail allowing a source of infringing copies to be traced. The
term is not, however, always used in this way; one commercial organisation offers a
tattooing technology for the protection of bit-mapped images which it describes as a
fingerprinting technique.
These three broad protection techniques are not mutually exclusive; indeed, they are complementary. At present, most current proposals incorporate only one of the technologies, but as these systems come to the market place the development of a service providing both encryption and tattooing seems inevitable.
Competition or Standardisation
The world of information technology has proved remarkably resistant to the
imposition of standards. The process of creating standards by standards institutions is
slow, whereas the speed of technological change is fast. The process which seems to
be the most successful is the use of commercial open systems. The IBM Wintel
architecture for PCs is acknowledged by most observers to be markedly inferior to the
Apple MacIntosh, yet it is by far the most widespread. With hindsight, even Apple
now concedes that it was wrong to exercise proprietary control over its operating
system for so long. The adoption of the CD and the CD-ROM as near-universal
standards arose when their inventors, Philips and Sony, agreed to make the specifications
publicly available through the range of differently coloured books.
Standardisation in IT emerges as a de facto process, not usually as the result of co-operative initiatives or from the work of standards organisations. This is likely to be the case with protection and identification methods as well. Protection technology is in its infancy, but enormous progress has been made over the last few years. The growing realisation of the importance of content has made the IT companies aware of the need to protect content, and a number of them are involved in development projects. IBM has recently announced a new system, called the Information Market, which protects material in transit using an encryption technique called Cryptolopes. Netscape have developed technologies to improve the security of the World Wide Web, including the Secure Sockets Layer (SSL) and the Secure HyperText Transfer Protocol (S-HTTP): to speed adoption, they have placed this technology in the public domain, although at present only their own servers and browsers can work with it.
The speed with which technology develops means that a system which is secure today will be insecure tomorrow; constant development is necessary. Nevertheless, for an effective market to develop, there must be some measure of consensus and standardisation. It is an open question whether the market or a formal standardisation process based on co-operation and the involvement of standards organisations is more likely to achieve the desired objectives. The theoretical benefits of formal standardisation are considerable, but its record in the world of information technology is at best patchy. Commercial open systems have consistently been the most successful approach. First, a commercial organisation places into the public domain all the interface features of its technology. This means that it is open to competitors to develop technology which matches that interface, as in IBM-compatible computers. The interface becomes a standard, and evolves with time; competitors frequently create supersets of the original - so that new products are compatible with but better than the original. The evolution of the HTML standard, driven by the development of Netscape browsing tools, is an example of this process in action.
At the same time, there is a task of consensus building to be done - in which meetings such as this UNESCO-ICSU conference play an important part. The EU has funded a major consensus-building exercise in the form of the IMPRIMATUR project. IMPRIMATUR will be running a series of conferences and special interest group meetings throughout 1996, and although there is a real danger of electronic publishing conference fatigue, the IMPRIMATUR meetings may well prove to be extremely influential. The participation of the scientific publishing community in such wider fora is clearly in the best interests of science.
Conclusion.
The management consultants McKinsey, who periodically introduce new buzzwords
to promote the expensive services provided by their highly-paid consultants, recently
announced that business process re-engineering is a thing (or at least a buzzword) of
the past: the new approach is 'Do it, then fix it'. In software development this
approach is well known and has recently been formalised as the rapid prototyping
technique; it is anathema to old-fashioned engineers, but it works. Without paying
McKinsey's consultancy fees, 'Do it, then fix it' seems to this speaker to be the right
approach to copyright protection and electronic publishing.
I have said little about the potential involvement of Reproduction Rights Organisations in this business. This is partly because it is a contentious point. Personally, I believe that RROs will prove to be the organisations best able to handle rights clearance on behalf of rights-holders, but equally I would strongly oppose any movements to create a statutory rôle for RROs in this area. It would be as foolish to adopt a position permanently excluding RROs from the administration and management of rights clearance as it would be to grant them the exclusive monopoly of rights clearance.
During the next two or three years, a range of commercial organisations will be offering copyright holders their own proprietary solutions to the protection problem. Some will protect material very well, but make it hard or slow to use (for example, some of the strong encryption technologies). Others will provide limited protection, but impose little or no overhead on users. Yet others will offer rights-holders the chance to tune the level of protection for each work, so that more valuable material is more strongly protected (but perhaps more inconvenient to use). It is too early to predict which will be adopted by most rights-holders, but they have one thing in common. They put the protection decisions back in the hands of the copyright holder, where they belong - including the fundamental decision, whether to protect at all.