Joint ICSU Press/UNESCO Expert Conference on ELECTRONIC PUBLISHING IN SCIENCE

UNESCO, Paris, 19-23 February 1996

Tools and standards for protection, control and presentation of data

By: DOUGLAS ARMATI, Consultant, Woodbridge, Suffolk, UK

The Focus

What tools and standards are currently available to facilitate protection, control and presentation of data? What are the desirable technologies for achieving these objectives? What promising proposals are there? What is the likelihood of their widespread acceptance?


The current situation

The range of tools designed to help authors, publishers and others protect, control and present their data is expanding.
Standards, however, although generally plentiful, are not yet as inter-operable as they need to become if a smooth information society infrastructure is to be built. The recent joint ITU/ISO/IEC meeting in Geneva to plan coordination of standards required for the Global Information Infrastructure (GII), ongoing work at the American National Standards Institute (ANSI) Information Infrastructure Standards Panel (IISP) and activity in many other standards bodies internationally, including those in the publishing industry, promise to provide the necessary focus.

For the moment, especially in the networked environment, a fevered quest is on to stretch the technological boundaries on all fronts simultaneously. This makes any long term prediction or commitment to one approach or another a highly hazardous proposition.

Three things are evident:

1) The drive to make all information systems inter-operable, including those related to protection, control and presentation, is gathering momentum.
2) Chosen tools and standards must be capable of scaling globally.
3) The mounting interest in object-oriented computing, in particular the Java interactive programming language, can be expected to have a positive impact on the approach to these problems.

It is currently possible to build a world class integrated electronic publishing system by combining the following readily available elements:

Such systems provide a degree of data protection, control and presentational assurance while the data remains in the server system. Once delivered to the client, however, other tools must be used. When dealing with valuable works in open communications networks, then, what tools can the owners of the rights in those works use, and to which standards can they refer, in order to protect, control and present their data once it has been delivered?

It is these aspects of protecting, controlling and presenting the data after it has left the security of the owner’s system that is demanding most attention. Let us look at each aspect in turn, focusing on those elements of particular relevance to electronic publishing in science.


Protection

What do we mean by protection? Keeping data safe, defending it from attack, guarding it, preserving it from danger. As there is a cost to such protection, it is clearly only worth keeping something safe if it is considered more valuable than the cost of protecting it. So the level of protection used is a function of perceived value. By this standard a great deal of human data generation and interaction is almost valueless. Very little data is protected in any other than the most basic fashion. Much scientific exchange is in this category and will remain so. Once a market is created, however, value emerges and more sophisticated data protection may become appropriate.

What is available? Leaving physical security of hardware to one side, the main techniques of data protection involve the erection of a barrier between the core value and potential users, both legitimate and otherwise, of that value. Generally this is achieved by encryption and by machine, network and file access controls. These may be of greater or lesser complexity depending on the situation and the value of the data being protected.
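
As a deliberately minimal illustration of such a barrier, the sketch below encrypts a data object before it leaves the owner’s system, so that only holders of the key can recover the content. It assumes the third-party Python cryptography package; the key handling is simplified purely for illustration and any comparable symmetric cipher would serve.

```python
# Minimal sketch: an encryption "barrier" placed around a data asset
# before distribution. Assumes the third-party `cryptography` package;
# key management is deliberately simplified for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # kept secret by the rights owner
cipher = Fernet(key)

plaintext = b"Valuable scientific dataset ..."
protected = cipher.encrypt(plaintext)  # what actually travels the network

# A legitimate user holding the key can cross the barrier ...
assert cipher.decrypt(protected) == plaintext
# ... while anyone without it sees only an opaque token.
```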


Control

Control systems are designed to allow the owners of valued data assets to institute means of checking, regulating, supervising, verifying and, if necessary, restraining access to and uses of those assets.

Having the ability to control a data asset is a pre-requisite to conducting transactions related to it. As transactions produce revenues it is in this area that most work is focused. Control of data in distributed networks is extremely complex. It involves the use of:


Presentation

As part of the process of maximizing its value, data is increasingly manipulated by software and hardware tools in order to optimize the quality, and hence the value, of its presentation. Tools that allow the owner of valuable data to maintain the integrity of the intended presentation format permanently, even after the data object is transmitted to a user, are therefore useful. This is particularly the case where an author’s reputation and the value of their creation may depend on the precise communication of the original version of a work, a concern of obvious relevance to scientific publishing.

In this area, portable document formats represent the state of the market, at least for documents carrying only text, graphics and audio. The state of the art, however, is further advanced, integrating 3D virtual reality, sound, video, graphics and text in emerging web formats such as VRML.

SGML, TEI, JPEG, MPEG, PDF and so on also inform multimedia electronic publishing decisions, yet despite being individually standardized do not always inter-operate smoothly with more exotic scientific data formats.


Identification and transaction management

Reliable identification systems are fundamental to efficient transaction systems. Building an efficient global electronic transaction system requires globally reliable identification systems. Persistent identification is also crucial to the design and efficiency of archives.

Over the past two years it has become clear that the technical challenges involved in permanently identifying and controlling the uses of digitised information in open communications networks can be solved. (1)

The real issue is not the technology but how to win agreement to cross-industry use of a particular protocol. How can a uniform data identification system be chosen and standardised, thereby providing the transactional heart for a highly granular, truly global networked multimedia information economy?


STM publishing

The publishing industry generally and the STM sector in particular were early leaders in this domain. As one of the principal groups exposed to Internet based competition, they have much at stake.

Since 1994, following Charles Clark’s much-quoted statement that “the answer to the machine is in the machine” (2), several studies have been undertaken examining various aspects of information identification and metering. (3-5)

Various groups in the publishing world have proposed solutions, each with particular reference to its own information identification needs. Recently Elsevier Science and others (including the American Chemical Society and the IEEE) have begun using a Publisher Item Identifier (PII) first defined by Dr. Norman Paskin. (6)

Work on a thorough revision of the ANSI/NISO Z39.56 standard - the Serial Item and Contribution Identifier (SICI) - has been proceeding. (7)

The Association of American Publishers (AAP) has been working on a Uniform File Identifier (UFI) and is the delegated body handling activity in this area on behalf of the ANSI Information Infrastructure Standards Panel (IISP).

Publishing industry related standards bodies such as BISAC, EDItEUR, ICEDIS, NISO and SISAC are all actively developing a range of protocols to facilitate information industry trade.

Despite these efforts, the international publishing industry has been hesitant about bearing the cost and primary responsibility for co-ordinating a cross-industry coalition to agree a uniform system for data object identification and transaction. It has been felt that in a world where trillions of dollars move through financial markets daily, an industry with revenues of only $100 billion annually is unlikely to influence the overall outcome.

It does seem probable that, for the foreseeable future, systems will be required that provide a unifying transactional heart by translating and co-ordinating disparate industry based information and rights management systems into a common code.

Even arriving at a definition for the common code will not, however, be possible without the active participation of all the affected industries. No mating geese, no golden egg.


Consequences of current industry-based practice

Electrons do not know the difference between a book and a film. Persisting with industry based identification schemes for the digital, machine readable representation of these artefacts suggests the world can still be neatly partitioned along industrial lines. It cannot.

Publishers and others are right to assume the emergence of network-based middleware capable of translating embedded data object identification codes. They would be wrong, however, to assume it would not be cheaper and more efficient, even in the medium term, to commit to a system of globally unique identifiers. Such a system would need to be capable of simultaneously protecting their investment in ISBN/ISSN and related coding schemes. It should also have the effect of making their works more readily accessible from a wider range of domains.
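
To make the idea concrete, a globally unique identifier could simply embed the existing industry code within a wider namespace, preserving the investment in ISBN/ISSN numbering. The sketch below is purely hypothetical; the “GID” prefix, layout and codes are invented for illustration and do not correspond to any proposed standard.

```python
# Hypothetical wrapping of legacy ISBN/ISSN codes inside a wider, globally
# unique namespace, so existing numbering investment is preserved.
# The "GID" prefix and layout are invented for illustration only.
def globalise(scheme: str, legacy_code: str) -> str:
    return f"GID:{scheme}:{legacy_code}"

print(globalise("ISBN", "0-000-00000-0"))   # -> GID:ISBN:0-000-00000-0
print(globalise("ISSN", "0000-0000"))       # -> GID:ISSN:0000-0000
```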

In the digital realm it is not just the design of identifiers themselves that is critical but also the encoding and decoding systems used to implement that design and the transaction systems built on those foundations.

Continuing to develop and implement industry-specific schemes for both the design of the identifiers and the methods of encoding and decoding data bearing these identifiers will isolate any industry from the benefits to be gained from the use of a common approach.

One of the key features of the universal data identification (UDID) system I specified was that the identifier form an integral part of the data structure. Remove or tamper with the identifier and a meaningless, valueless object would result. No code, no value.

As a practical example, in tests done with musical material at Imperial College in London, removal of the structural identifier resulted in a CD-quality digital recording suddenly resembling the sound of a badly scratched vinyl disc. Yet even in the crispest silences the coding was inaudible to the most finely tuned ears. Indeed, specialists from the UK National Discography actually preferred the sound of material containing the code! Stripped of its identifier, however, the product became valueless.
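
One way such a structural binding might be realised is sketched below: a keystream derived from the identifier is folded into the payload, so that stripping or altering the identifier leaves only meaningless bytes. This is a toy illustration only, not the UDID specification or the scheme tested at Imperial College.

```python
# Toy sketch of a "structural" identifier: the payload is only
# recoverable in the presence of the correct identifier.
# Purely illustrative; not the UDID specification described above.
import hashlib
from itertools import count

def keystream(identifier: str, length: int) -> bytes:
    out = b""
    for counter in count():
        out += hashlib.sha256(f"{identifier}:{counter}".encode()).digest()
        if len(out) >= length:
            return out[:length]

def bind(identifier: str, payload: bytes) -> bytes:
    ks = keystream(identifier, len(payload))
    return bytes(p ^ k for p, k in zip(payload, ks))

unbind = bind  # XOR is its own inverse

data = b"CD-quality audio samples ..."
packaged = bind("UDID:0000-1234-5678", data)

# With the identifier intact the original is recovered;
# remove or alter it and only meaningless bytes result.
assert unbind("UDID:0000-1234-5678", packaged) == data
assert unbind("UDID:0000-0000-0000", packaged) != data
```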

It is not just the identifiers themselves but also the way they are deployed that will enable the design of robust, cheap, universally useful combinations of data protection, access control and transaction systems.


Storage, archiving and protection of the global virtual archive

Another pressing question is how to preserve access to an archive generated using a multitude of devices and programmes, stored in arcane proprietary formats.

An attractive element of a persistent universal data object identification protocol is that it affords the chance to store information about the object in an evolving database separated from the object itself. It is possible to store all kinds of additional data about the object (so called meta-data), including its format, devices required for its reproduction and so on.

If we also decide to uniquely identify the applications and the devices required to run them, it becomes relatively simple to create sufficient device and application repositories on the network to enable reliable future reference. Without persistent identifiers providing ready reference to such repositories we stand to lose access to vast quantities of valuable information with each new generation of hardware and software.
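
Such a repository can be pictured as little more than records keyed by the persistent identifier. In the sketch below the identifiers and field names are entirely hypothetical; the point is simply that format and device information lives apart from the object itself and can be updated as technology evolves.

```python
# Sketch of an evolving meta-data registry kept separately from the data
# objects it describes. Identifiers and field names are hypothetical.
metadata_registry = {
    "UDID:0000-1234-5678": {
        "title":       "Spectral survey, run 42",
        "format":      "proprietary-instrument-binary v3",
        "reader":      "UDID:APP-0042",   # identifier of the application
        "device":      "UDID:DEV-0007",   # identifier of the device class
        "migrated_to": None,              # updated when the object is converted
    },
}

def how_do_i_read(object_id: str) -> str:
    record = metadata_registry[object_id]
    return f"Use application {record['reader']} on device {record['device']}"

print(how_do_i_read("UDID:0000-1234-5678"))
```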


Barriers to a distributed information environment

1. Global scalability
The visions of the information future to which we commit should ideally be able to scale globally to meet both explosive real-time information demands and archival needs.

2. Interoperability
A great deal of attention has been focused on inter-operation and impressive strides are being made. The benefits from inter-operation are clear. The addition of new classes of inter-operating elements has a synergistic effect on the whole system. Platform-independent, object-oriented interactive programming languages (such as the Java language from Sun Microsystems) offer substantial promise in this regard.

3. Digital : analogue
Human beings cannot directly process digital signals. In order for people to experience the content of a digital signal the digital information must be converted into analogue wave forms. This creates a considerable problem for protecting information assets. Many systems for the identification and protection of these assets in digital form are rendered ineffective by simple digital to analogue conversion. More sophisticated schemes can readily be broken by iterative digital to analogue to digital conversions.

An ISRC, for example, used to identify audio recordings, is lost if the signal is converted to analogue form. It is then possible to re-digitise the analogue signal, absent the ISRC. When the digital original was dramatically superior to its analogue conversion this was not such a problem. Now, however, up to thirty iterative digital-to-analogue-to-digital conversions cause little loss of signal quality. The music industry is currently working on a solution to this problem.

While glyphing systems and other bridging techniques, including biological processors, may offer reliable future solutions, in the interim it will be necessary to rely on proposed legislative support of technical systems to fill the gaps.


Building diversity from uniformity: the crucial role of identifiers

In the natural world diversity is built on uniformity. A common code is the basis of the evolution of an amazing array of different species and a dazzling range of variation within each species. The entire genetic code of an individual living entity is folded into every one of its cells. Each cell forms to perform its unique function in the whole by reference to these instructions. There may be much to learn from biochemistry about data object management and persistent identification systems.

As we move into an era of molecular computing these lessons will become even more important. If valuable information replicates like a virus, how else, without a common molecular coding system, will we be able to trace and reward the creative beings in our midst for their inventions?

It may be possible in the foreseeable future to simply code each grain of intellectual property uniquely then allow it to replicate - unprotected and uncontrolled - empowering it to automatically return a constant stream of income to its creator and the market agents who encourage its replication.

This is the world promised by a uniform, structural data object identification system even in the existing electronic realm. In the coming molecular world it may be the only solution.


Leading work: identifiers

A great deal of work has been done in recent decades on standardisation of identification in the electronic realm. Universal protocols for the identification, retrieval, transport and presentation of resources are one of the cornerstones of the success of the World Wide Web. The now familiar slashed Uniform Resource Locator (URL) references, coupled with basic hypertext mark-up language (HTML) and its more exotic variants, have provided the navigation and presentation tools lacking in earlier generations of the Internet.

Over the past year the Internet Engineering Task Force (IETF) has battled with another aspect of uniform resource identification - Uniform Resource Names (URNs).

The IETF Uniform Resource Identifier (URI) working groups (8) set out to achieve a scheme for persistent identification of information resources in the Internet environment.

These working groups, while producing extremely valuable results and providing functionality to applications in the Internet domain, have found they have neither the cross-industry connections nor the understanding of all the related issues needed to make their proposed approaches universal across all sectors.

There has been vociferous debate within the IETF URN Working Group on the precise purpose of URNs. Only scant regard has been given to the need for IPR related data object identifiers to persist for well over a century. Indeed attention to IPR issues generally has not been a high priority.

Without an inclusive cross-industry mandate addressing the many complex identification issues involved, any attempt to generate a de facto uniform data object identification standard is unlikely to succeed.


The Common Information System (9)

The music and audio-visual industries are also investing heavily in a project known as the Common Information System (CIS), designed to enable global automation of rights management by 2000. Identifiers drive every aspect of the system. Global databases either already exist or are planned to enable identification of:

The CIS is open to the extent that it relies on standardisation of an International Standard Work Code (ISWC) which, it is intended, will be capable, with appropriate prefix changes, of identifying all kinds of copyright works. This includes works currently numbered using the ISBN and ISSN systems.

ISWC standardisation is well advanced, with trials currently being conducted in Australia, Scandinavia and Ireland. UK trials will begin shortly.

The ISWC work is being done by the International Numbering Working Group of the BIEM/CISAC Information Systems Steering Committee after extensive consultations with copyright societies and copyright owners in many different territories.

The International Numbering Working Group is one of several groups working on different aspects of the CIS. CISAC in Paris is at the centre of this activity.

The ISWC will be particularly useful to the music and audio-visual industries. The plan calls for a global works database, to be located at ASCAP in New York, to store information from local registries internationally.

The ISWC will be used by the industries as a bridge between the existing International Standard Recording Code (ISRC), used by the recording industry to identify individual audio- and video-gram recordings, and the NUI, used to identify audio-visual works, to an expanded database of Interested Parties (IP) and International Standard Agreement Numbers (ISAN), to be held at SUISA in Switzerland.

The ISRC already links to existing physical packages of recordings by way of product bar-codes.

The IP list will provide links to an International Standard Agreement Number (ISAN), detailing the terms of agreements. The CIS plan calls for all the various elements of the system to be in place and functioning by 2000.

CISAC and the IFPI are directly involved in the EC funded IMPRIMATUR project. CISAC is also a partner in the IMPRIMATUR related COPEARMS project.

The CIS initiative is not dependent on agreement with the controllers of other forms of content, although their involvement would be welcome. It stands out amongst the current activities in the identification arena as having solid support from its host industries. Whether this can be spread beyond the borders of the music and audio visual industries is still to be seen. Any move to integrate aspects of publishing industry rights management and data identification into the Common Information System framework should be encouraged.


Elsewhere

Work is proceeding in other industries too. The Headers & Descriptors working group of the Society of Motion Picture and Television Engineers (SMPTE) has been working for some years on standardising Universal Labels for Unique Identification of Digital Data. Now in its fifth draft, this standard is designed to "function across all types of digital communications protocols and message structures, allowing the intermixture of data of any sort".

"This standard is also intended to serve as a model for other organisations what wish to label data in a manner that is universally unambiguous, globally unique, and traceable to the authorising organisation." [Proposed SMPTE Standard P18.010 Fifth Draft 20/8/95 p1].

The proposed SMPTE Universal Label, conforming to ISO/IEC standard 8824-1, Information Technology - Abstract Syntax Notation One (ASN.1), has three parts:

1) Network Object (identifying the location of the organisation defining the label within the ISO/IEC hierarchy)

2) Data Type or Identifier (defined by the defining organisation)

3) Structure (carrying the network object and identifier)
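
In outline the three parts might be pictured as in the sketch below. The field values are invented for illustration, and the normative encoding is the ASN.1 form specified in the draft standard, not this rendering.

```python
# Illustrative rendering of the three-part SMPTE Universal Label.
# Field values are invented; the normative encoding is ASN.1 (ISO/IEC 8824-1).
from dataclasses import dataclass

@dataclass(frozen=True)
class UniversalLabel:
    network_object: str   # place of the defining organisation in the ISO/IEC hierarchy
    data_type: str        # identifier defined by that organisation
    structure: str        # structure carrying the network object and identifier

label = UniversalLabel(
    network_object="iso.org-example.smpte",   # hypothetical registration path
    data_type="video-essence",
    structure="basic",
)
print(label)
```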

Also in the audio-visual world, the Motion Picture Experts Group (MPEG) and Digital Audio-visual Council (DAVIC) are moving towards standardisation of rights and content identifiers.

In the increasingly significant object-based software domain the Object Management Group (OMG), with its 560 members, including Microsoft, IBM, Hewlett-Packard, AT&T, Apple, Unisys, Bellcore, BT, Fujitsu, Novell and many more impressive names, is considering whether to modify the Common Object Request Broker Architecture (CORBA).

This architecture currently uses an Object Reference as a unique identifier within a local system. The modification under consideration is whether to make the Object Reference globally unique.

In isolation this may not be very momentous. As part of an overall approach to data object identification it could have significant ramifications.

What is missing is global cross-industry co-ordination of these efforts. At the moment "universal" means "universal in our industrial context".

It is to be hoped the programme flowing from the recent joint ISO/IEC/ITU meeting on standards for the Global Information Infrastructure will provide a lead. There are already many other international projects underway, including the new EC projects IMPRIMATUR/COPEARMS and INFO2000, the American National Standards Institute (ANSI) Information Infrastructure Standards Panel (IISP) and the Information Industry Association-hosted Digital Content Rights Management Group (DCRMG).

A lead needs to be taken to bring together these efforts with those of the many other bodies addressing these issues to ensure a well co-ordinated approach is taken to consensus building on universal data object identification and rights management standards.


Electronic information commerce systems

There are several systems currently being proposed that might be used as the core of electronic information commerce systems in distributed network environments. Two stand out: IBM’s infoMarket clearinghouse service, built around its Cryptolope container technology, and EPR’s InterTrust architecture, built around its DigiBox container.

Neither system is yet in commercial use although IBM’s infoMarket is operating a free information search and retrieval system via its web-site. IBM will commercially launch its Cryptolope technology and the infoMarket clearinghouse system in March 1996. A second version is already in development and is expected in June 1996. EPR has been marketing its technologies since October 1995.

EPR, which owns four existing patents, has lodged an application for a wide-ranging foundation patent covering electronic information commerce and rights management systems. It runs to over 900 pages, one of the largest applications ever filed. The US Patent and Trademark Office is due to rule on it this year. If granted, it can be expected to affect the entire electronic information commerce industry.

The EPR system relies on securing the content and carrying the requisite commercial transaction control components, permissions software and so on with the content object at all times, in a container called a DigiBox.

Valuable data carried in these containers can only be expressed in the presence of the control objects, making so-called superdistribution of data objects feasible.


Superdistribution

Superdistribution is seen as an ideal use of internetwork technology. It enables secure copies of valuable digital objects to travel freely from user to user while ensuring any “pass along” use of the object will result in proper payments to the rights owners. Such systems may be vital in enabling secure differential pricing in different national territories and market segments.

To achieve this result each use of a DigiBox-contained object can be metered. In order to use such an object, users would first obtain credit either directly from the supplier of the object or via a third-party financial intermediary. Charges for metered uses of the object would then be deducted from credit residing on the local system until it is exhausted, at which time further credit could be purchased via the telecommunications network.
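
The flow just described can be sketched roughly as follows. The class, prices and top-up mechanism are hypothetical and this is not EPR’s implementation; it merely shows the shape of a local meter that deducts from pre-purchased credit and signals when a top-up is needed.

```python
# Rough sketch of local use metering against pre-purchased credit.
# Prices, names and the top-up mechanism are hypothetical.
class UseMeter:
    def __init__(self, credit: float):
        self.credit = credit            # obtained in advance from the supplier
                                        # or a financial intermediary

    def charge(self, price: float) -> bool:
        """Deduct one metered use; return False if credit is exhausted."""
        if self.credit < price:
            return False                # caller must buy more credit online
        self.credit -= price
        return True

meter = UseMeter(credit=5.00)
for page in range(12):
    if not meter.charge(0.50):
        print(f"Credit exhausted at page {page}; please purchase more.")
        break
```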

EPR’s technologies are claimed to support almost any electronic publishing business model from subscription to pay-per-view.


Payment assurance

IBM takes the view that the most important element is payment assurance. To this end its system delivers a secure cryptographic envelope (hence Cryptolope) containing the content. The envelope has a plain-text header describing the contents, rights information and transaction terms. Several envelopes can be nested, one inside the other.

The plain text header can be read off-line. In order to open the envelope, however, it is necessary to be online to the infoMarket clearinghouse.

While connected to the clearinghouse, the client “cracks open” the Cryptolope, triggering a transaction event. In the present version, once the envelope is opened the file inside can then be treated as any other on the client’s system. This limitation will be addressed in later versions.
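
In outline, the envelope just described can be pictured as a readable header travelling alongside an opaque body, with envelopes nestable one inside another. The structure and field names below are illustrative guesses, not IBM’s actual Cryptolope format.

```python
# Illustrative shape of a cryptographic envelope with a plain-text header.
# Not IBM's actual Cryptolope format; field names are invented.
envelope = {
    "header": {                       # readable off-line, without any key
        "title": "Journal article, vol. 12 no. 3",
        "rights": "single-user display only",
        "price": "USD 3.00",
    },
    "body": b"\x8f\x02...encrypted content...",  # opened only via the clearinghouse
}

# Envelopes can be nested, e.g. an issue wrapping individual articles:
issue = {
    "header": {"title": "Journal issue 12/3", "price": "USD 15.00"},
    "body": [envelope],               # inner envelopes keep their own terms
}
```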

As the containers are designed to be programmable it will be possible to reflect changes in any aspect of the protection, control or presentation mechanisms carried with a particular object even when transmitted to other users. The EPR technology is already capable of achieving this objective and IBM’s system will reportedly address this need fully in the June 1996 release.


Interoperability

One problem to be overcome is interoperability. EPR’s DigiBox containers cannot be used in the IBM infoMarket architecture and likewise the IBM Cryptolope technology does not operate in EPR’s InterTrust model. This means gateway interoperability is currently the only option if both these systems find a place in the market.

Late last year a Digital Content Rights Management Group (DCRMG) was established, under the auspices of the Information Industry Association (IIA), to address standards issues. As members, both IBM and EPR are committed to ensuring their systems comply once standards have been agreed.

In the meantime, however, IBM is moving ahead aggressively in the marketplace. As if to bear out the McKinsey advice “Do it, then fix it”, the General Manager of the infoMarket project, Mr. Jeff Crigler, makes it clear his company will take the technology they already have to market and then add functions as providers and users make clear what is required.

IBM are working with Sun, Netscape, Folio and Adobe to integrate their system into secure browsers and cross-platform presentation software such as Adobe’s Acrobat.

EPR is still hoping to achieve broad agreement on inter-operation standards prior to the commercial launch of products using their technologies. However, the breakdown of licensing negotiations with IBM and IBM’s determination to get on with the job may mean EPR will have to forego its pre-market standardization ambitions.

EPR’s InterTrust architecture and DigiBox solutions appear to be a generation ahead of IBM’s. Together they provide a permanent payment assurance, rights management and presentational container and transaction system, and they do not rely on a contemporaneous network connection to a clearinghouse for either the first or subsequent transactions.

As hardware device operating systems and software applications can be designed to be aware of the DigiBox or Cryptolope containers, the technology partnerships each company forms will determine the eventual shape of the solution. So, too, will each company’s commercial success in winning the support of publishers and other controllers of valuable intellectual properties.

EPR do not plan to run a clearinghouse themselves. Their business plan involves the marketing of Software Developers’ Kits and the licensing of their technology in return for a very low royalty on each transaction facilitated by their technologies. They are yet to announce the extent of such relationships.


Export/import restrictions

A more difficult hurdle still needs to be overcome. As these systems rely on highly secure cryptographic algorithms, some parts may be subject to controversial US export restrictions. Other countries may prohibit their importation, for similar reasons, thus stifling cross border use of these valuable tools. According to IBM, the technique they use to “insert” data objects into Cryptolopes is presently affected by these restrictions. Negotiations are underway to resolve these questions.

Folio and others in the Digital Content Rights Management Group are working with the various metering technology companies to provide the framework and standards for electronic information commerce in publishing.

Other companies such as Verifone and National Semiconductor’s iPower unit, with a background in electronic transaction systems, are also involved in building viable systems.

Market results to date suggest use metering generally is not a popular option with users. It is yet to be seen whether micro-payment transaction systems, combined with secure digital object containers, can change this situation.


Digital object fingerprinting and watermarking

Several techniques have been developed that enable owners of valuable data to verify the source of a particular copy of a digital object by reference to a hidden code, usually inserted using steganographic techniques. This is often cross-referenced to the buyer at the time of first sale. Such methods have widespread application in the protection and control of data. IBM Corp., Cyphertech Systems Inc. and Highwater Designs Limited were among the early entrants in this field. NEC has also recently announced a secure spread-spectrum invisible watermarking system for multimedia.
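
A very simple instance of the idea (illustrative only; commercial schemes such as NEC’s spread-spectrum watermark are far more robust) hides a buyer code in the least significant bits of a copy’s sample values, where it is imperceptible but can later be read back to trace the source of that copy.

```python
# Toy least-significant-bit watermark: hides a buyer code in sample values.
# Illustrative only; real fingerprinting schemes are far more robust.
def embed(samples: list[int], code: str) -> list[int]:
    bits = [int(b) for byte in code.encode() for b in f"{byte:08b}"]
    return [(s & ~1) | bit for s, bit in zip(samples, bits)] + samples[len(bits):]

def extract(samples: list[int], length: int) -> str:
    bits = "".join(str(s & 1) for s in samples[:length * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

pixels = list(range(200))                    # stand-in for image or audio samples
marked = embed(pixels, "BUYER-0042")         # hypothetical buyer code
assert extract(marked, len("BUYER-0042")) == "BUYER-0042"
```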


Distributed trust

Another technology useful in research and digital library environments may be distributed trust systems such as Surety Technologies, Inc.’s digital notarization services. By using Surety’s software, parties to contracts concluded in open network environments can certify each other’s digital signatures and authenticate the source and integrity of the contents of a digital object, without the need to employ trusted third parties.

This is achieved by using a validation feature in the software. A certificate is generated by an external Internet-based validation server at the time the document is created. The validation software later checks this certificate and recomputes the digital fingerprint of the present document. If the two match, it is possible to prove the document has not been tampered with; even the slightest alteration will change the hash code.
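
At its core the check is a fingerprint comparison, as the sketch below shows. The certificate handling and the interaction with the external validation server are omitted, and the hash algorithm is chosen here only for illustration.

```python
# Core of tamper-evidence checking: recompute a document's fingerprint and
# compare it with the one certified earlier. Certificate handling and the
# external validation server are omitted; SHA-256 chosen for illustration.
import hashlib

def fingerprint(document: bytes) -> str:
    return hashlib.sha256(document).hexdigest()

original = b"Table 3: measured values 4.1, 4.2, 4.05 ..."
certified = fingerprint(original)        # registered at creation time

# Later: even a one-character change produces a completely different value.
tampered = b"Table 3: measured values 4.1, 4.2, 4.50 ..."
assert fingerprint(original) == certified
assert fingerprint(tampered) != certified
```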

Interestingly, Surety’s work was originally stimulated by the need to prove conclusively that an electronic record or digital document has not been modified. The company’s founders were particularly intrigued by the problem of validating research findings, following allegations that an MIT researcher had faked laboratory data for an article in the journal Cell.


Consensus and conflict

There are many aspects to the debate about the information society. Consensus in one area may lead to conflict in another. Re-shaping the world in the image of sub-atomic particles is bound to cause some aggravation!

Major attention needs to be given to the elements of a common technological platform, its design, implementation and administration.

Development of a comprehensive suite of tools to facilitate protection, control and presentation of data is well advanced. Standardization is beginning to occur in less dynamic sectors and in areas, such as set-top boxes, digital video discs and micro-payment transaction protocols, where the marketing of conflicting systems would prove uneconomic.

In global deployment of these tools and standards the really tough challenges lie at the interface between technology and society.

The techniques discussed impact on many sensitive policy issues:

Given the range of strongly held world-views expressed by different groups on these issues, a great deal of work needs to be done, both technically and politically, to resolve differences and so reveal the true extent of the achievable elements of the grand international information society vision.

Science has a history of cutting across these divides in the quest for knowledge. It is to be hoped the scientific community may lead the way to achieving such a result in the wider world.


References

1. Douglas Armati, “A Uniform Approach to Identification of Digitised Copyright Content?”, STM Newsletter 95, November 1994; paper delivered at the Informal Part of the 26th STM General Assembly meeting, Frankfurt, October 1994.

2. Charles Clark, “The Publisher in the Electronic World”, report from the International Publishers Copyright Council for the Third IPA International Copyright Symposium, Turin, 23-25 May 1994.

3. Robert Weber, “Metering Technologies for Digital Intellectual Property”, report to the International Federation of Reproduction Rights Organisations Committee on New Technologies, October 1994.

4. Douglas Armati, “Information Identification”, report to the STM International Association of Scientific, Technical and Medical Publishers Task Force on Information Identifiers and Metering Systems in the Electronic Environment, March 1995.

5. Christopher Burns, “Copyright Management and the NII”, report to the Enabling Technologies Committee of the Association of American Publishers, May 1995.

6. Norman Paskin, “Publisher Item Identifier”. URL: http://www.elsevier.nl/info/epstand/pii.html

7. Z39.56-1991 and Z39.56-1995. URL: http://www.faxon.com/standards/Z3956-SICI-Intro.html

8. Internet Engineering Task Force Uniform Resource Identifier Working Groups:

9. “The Common Information System - Building a data network for the 21st century”, plan proposed by the BIEM/CISAC Information Systems Steering Committee, December 1994.

Douglas Armati is an independent researcher, consultant and communicator. He is the author of 'Intellectual Property in Electronic Environments', to be published in May 1996 by Cambridge Market Intelligence (CMI), and the editor of 'Electronic Information', a monthly publication in the CMI Technology Watch series. In 1995 he authored the 'Information Identification' report for the Information Identifiers and Metering Systems Task Force of the STM international association of scientific, technical and medical publishers and undertook a study on Uniform File Identifiers for the Association of American Publishers. He consults primarily to international corporations, copyright collecting societies and trade associations.

Author:

Douglas Armati
7a Angel Lane
Woodbridge, Suffolk
IP12 4NG
England
Phone/Fax: +44 1394 380 874
© 1996 Douglas Armati


