
Storage Computer Corp & Anor v Hitachi Data Systems Ltd

[2003] EWCA Civ 1155

Case No: A3/2002/2253
Neutral Citation Number: [2003] EWCA Civ 1155
IN THE SUPREME COURT OF JUDICATURE
COURT OF APPEAL (CIVIL DIVISION)

ON APPEAL FROM CHANCERY DIVISION

MR JUSTICE PUMFREY

Royal Courts of Justice

Strand,

London, WC2A 2LL

Wednesday 30th July 2003

Before :

LORD JUSTICE ALDOUS

LORD JUSTICE MANCE

and

MR JUSTICE JACOB

Between :

(1) STORAGE COMPUTER CORPORATION

(2) STORAGE COMPUTER UK LIMITED

Appellants

- and -

HITACHI DATA SYSTEMS LTD

Respondent

(Transcript of the Handed Down Judgment of

Smith Bernal Wordwave Limited, 190 Fleet Street

London EC4A 2AG

Tel No: 020 7421 4040, Fax No: 020 7831 8838

Official Shorthand Writers to the Court)

Simon Thorley QC (instructed by Bird & Bird) for the Appellants

The Respondent was not present or represented

Judgment

As Approved by the Court

Crown Copyright ©

1.

This is the judgment of the Court.

2.

Storage Computer Corporation are the proprietors of Patent EP 0294287 in respect of an invention entitled “Fault-tolerant, error-correcting storage system and method for storing digital information in such a storage system.” Their associated company, Storage Computer UK Limited, are exclusive licensees. There is no need to differentiate between them and we shall refer to them as Storage.

3.

Storage alleged that Hitachi Data Systems Ltd had infringed the patent. Pumfrey J in his judgment ([2002] EWHC 1776 (Ch)) held that the patent was not infringed. He also held that claims 1 and 2 (which stand or fall together) were invalid for obviousness.

4.

After the judgment was handed down on 2nd July 2002, but before the order was made, Storage and Hitachi entered into a settlement agreement which included an obligation that Hitachi would take no further part in the proceedings or any appeal. However, Hitachi did not consent to the appeal being allowed and the settlement agreement contained a number of important terms dependent upon the outcome of the appeal.

5.

Storage appealed, contending that the judge was wrong both as to infringement and validity. Because of the terms of the settlement, Hitachi would take no part in the appeal. So this Court was confronted with a substantive appeal which would involve considering difficult technical matters without the assistance of full argument. Such argument would need Counsel fully instructed so as to be capable of dealing with the technical matters. The appointment of an amicus could not be an adequate substitute for this.

6.

Having regard to the terms of the settlement agreement, we concluded that it was not appropriate to allow the appeal without argument. That placed a considerable burden on the Court despite the great assistance provided by Mr Thorley. He helped us through the technology, advanced his client’s case with clarity and skill while at the same time making sure that we had before us the full picture.

The Patent

7.

No criticism was made of the explanation given by the judge of the background to the patent. In our view it could not be bettered. For that reason we believe it right to set it out in full.

“2.

'287 is entitled 'Fault tolerant, error-correcting storage system and method for storing digital information in such a storage system'. It is accepted that the claims are entitled to priority from 2 June 1987. The substantial issue is infringement.

3.

This patent is concerned with irrecoverable hard disk-drive errors. Although the invention is stated to be equally applicable to other forms of storage, such as bubble memory and the like, it is principally concerned with disk drives. I shall explain the field of application by reference to hard disks of the kind which were in use for personal computers in 1987, since that is the class of disks with which the patent is particularly concerned. Such disks are often called Winchester disks, after the IBM Research Department at Hursley Park, Winchester, where they were invented. In such disk drives, there are one or more rotating platters made of magnetic material. The rotating platters are notionally divided into concentric tracks, each track being divided into sectors. If there is more than one platter, the vertically aligned tracks are considered to form a 'cylinder'.

Data is written and read by magnetic heads which fly above the surfaces of the rotating platter, riding on the thin layer of air which is entrained by the rotating object. Normally there are two heads per platter, each head being mounted on an arm which may be moved in and out, towards and away from the hub of the disk, in reliance upon control signals.

4.

The internal electronics of such disks are remarkably complex. By 1987 it was well established that all hard disks intended for personal computers must present one of a comparatively limited number of standard interfaces to the hardware in the computer used to control them. The only such standard referred to in the patent is the so-called SCSI or small computer system interface. The basic idea is that there is a SCSI controller ('host adapter') in the computer; a SCSI cable connecting the adapter to the drive; and the SCSI drive.

5.

The basic unit of storage of such a drive (at least so far as the host machine is concerned) is the sector. SCSI conceals much of what is actually going on in a disk drive, but the sector will be the minimum amount of data written or read in one go by the drive. The way to address the sectors is by cylinder, head and sector number. The cylinder says how far from the hub; the head identifies the particular platter side; and the sector identifies the particular part of that track.
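By way of illustration only (no code appears in the patent or the judgment), the cylinder/head/sector addressing just described reduces to a single formula. The geometry figures below are the hypothetical ones the judge uses in the next paragraph (8 heads, 64 sectors per track):

```python
# Illustrative sketch: convert a (cylinder, head, sector) address into a
# single linear sector index. The geometry values are hypothetical.
HEADS = 8               # one head per platter side
SECTORS_PER_TRACK = 64

def chs_to_linear(cylinder: int, head: int, sector: int) -> int:
    """Sector numbers are conventionally 1-based; the result is 0-based."""
    return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)

# The third sector of the track under head 2 on cylinder 5:
print(chs_to_linear(cylinder=5, head=2, sector=3))  # -> 2690
```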

6.

The SCSI interface is the face that the disk presents to the world. It is not safe to assume that a SCSI disk said to have (say) 256 cylinders, 64 sectors per track and 8 heads will in fact have such an arrangement internally, but this is what the electronics presents to the world. What matters is the sector. Every sector read out will be the result of at least one read of a particular sector somewhere in the drive.

7.

Hard disk drives display two types of error, soft errors and hard errors. When a request is made to read or write the contents of a particular sector, the drive may not succeed first time. It will retry the operation a specified number of times. If it succeeds before reaching the maximum number of retries, the error is a soft error, and the drive may deal with it internally. If it fails altogether, that is a hard error. An error signal is returned to the computer, and the operation will have failed. The patent is principally concerned with recovery from hard errors in a particular drive.
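The retry behaviour described in this paragraph can be sketched as follows (an illustration only: the retry limit and the failure model are invented stand-ins for the drive's internal logic):

```python
import random

MAX_RETRIES = 8  # hypothetical limit; a real drive fixes its own figure

def read_sector_once(sector: int) -> bytes | None:
    """Stand-in for one physical read attempt; None models a failed read."""
    return None if random.random() < 0.1 else b"\x00" * 512

def read_sector(sector: int) -> bytes:
    for attempt in range(1, MAX_RETRIES + 1):
        data = read_sector_once(sector)
        if data is not None:
            # Succeeding on a retry (attempt > 1) is a "soft" error: the
            # drive deals with it internally and the host never sees it.
            return data
    # Maximum retries reached without success: a "hard" error is
    # signalled back to the computer and the operation has failed.
    raise IOError(f"hard error reading sector {sector}")
```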

8.

At the priority date, there is no doubt that it was well known to deal with hard errors by duplicating disks. Two disks containing duplicate data would be highly unlikely to display a hard error at the same sector. Duplicate disks thus greatly increase data security. The problem with duplication is that the data throughput is halved for every write operation, since two sectors have to be written for every one written before. Data throughput for a read operation remains the same as for a single disk, since recourse will be had to the second disk only when the first fails. Professor Maller gave the following evidence in relation to the common general knowledge in the art which was not, I think, challenged:

'3.2.14 Since the number of read/write operations per second, sustainable by an individual disk unit, is limited for purely electro-mechanical reasons and has only risen from approximately 20 in the 1960s to roughly 50 today, it follows that only by using a large number of disks simultaneously can a high transaction rate be sustained. Logical files can be spread over multiple disk units -- a facility which in the early 1980s was known by some people as syndicating. Syndicating offered the possibility of using individual disk channels concurrently to increase both the potential transaction rate, on a particular file, as well as the bulk data transfer rate. This facility could be achieved with exchangeable disk packs, of course, but then the packs, when taken off line, had to be kept together as an entity and, human error being what it is, one of them would be mislaid or used for something else! This idea of syndication on exchangeable disks led to the concept of an array of fixed disks on which files of data would be stored as a set of fixed sized "chunks" with each "chunk" being allocated to a separate disk. This arrangement later became known as striping. However, the term, striping, was not commonly known prior to 1987. Both the number of "chunks" and the stripe depth became parameters which could be chosen to suit particular applications. For example, arrays of fixed disks could be used for:

high volume transaction processing: ie a system in which a large number of individual tasks arrive at a high rate, each of which involves processing only a relatively small quantity of data; eg banking debiting and crediting and airline seat reservation systems (these would generally use a deep stripe of perhaps a disk track or a cylinder so that all the disk activity associated with one user transaction would be localized on one drive);

transferring large volumes of data from individual files at very high speed -- an essential requirement in the fields of scientific computation and system simulation; eg weather forecasting or processing satellite data. (In these applications the stripe depth would normally be shallow; eg a single disk sector of perhaps 512 bytes.)
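The allocation of fixed-sized "chunks" to separate disks that Professor Maller describes amounts to a simple mapping. A minimal sketch (the disk count and chunk size are arbitrary, as he says these parameters were chosen per application):

```python
# Illustrative striping sketch: logical chunk i of a file is placed on
# disk (i mod N) at slot (i div N). The parameters are arbitrary.
N_DISKS = 6
CHUNK_BYTES = 512   # a shallow stripe of one sector; a deep stripe might
                    # instead use a whole track or cylinder

def place_chunk(chunk_index: int) -> tuple[int, int]:
    """Return (disk number, chunk slot on that disk) for a logical chunk."""
    return chunk_index % N_DISKS, chunk_index // N_DISKS

for i in range(8):
    disk, slot = place_chunk(i)
    print(f"chunk {i} -> disk {disk}, slot {slot}")
```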

3.2.15

The concept of using multiple disk channels on the same disk drive had already been exploited in those applications where high data transfer rates were required; these ranged from large scientific computations to special purpose high speed machines for searching through large data files. One way of achieving this was by reading several tracks from the same cylinder in parallel (see paragraph [3.2.7] above). However, this technique had become difficult to implement on the highest capacity exchangeable disk units but was much easier on fixed disk units. But as fixed disk units had become a commodity item by the middle 1980s, it was much easier to run the units in parallel as a disk array, having stored the file across them, than to modify them by installing multiple read electronics to perform concurrent multi-head read.

3.2.16

Although striping data across disk arrays gave improved performance, it also had a severe disadvantage: if any one disk failed, the system would fail. The reliability of the system then depended upon the number of disks in the array. The greater the number of disks, the less reliable the system became. So "back-up" was essential to guard against lost data in the event of a disk failure and, in many scientific applications, the files were very large and mirroring would be very costly. "Back up" was often met pragmatically by using magnetic tape cartridges with a high data transfer rate. In some applications, however, an important requirement was speed in getting the information processed, eg weather forecasting, and the possibility of having to restore a disk from a cartridge tape in the middle of a run would have led to unacceptable delays. People would have had the weather before the forecast was complete! A requirement existed, therefore, for a method of error correction, which could ensure that a disk system could maintain a high average data transfer rate with any errors only causing a momentary hiatus.
3.2.17

The idea of using some form of error correcting code which could operate across an array of disks and correct data in the event of a disk failure "on the fly", as it were, was a topic of discussion and R&D during the mid 1980s, but was not widely known at the time. Since an individual disk unit could signal a read error (see description of CRC in paragraph [3.2.6] above), the use of parity across a disk array as an error correcting technique, which only added one more disk, provided a simple way of correcting any error, subject only to not more than one disk failing simultaneously. Parity as a concept had been well known for several decades (and is described in paragraph [3.2.19] below). However, as stated above, applying the technique to disk arrays was only the subject of discussion and R&D and was not widely known in June 1987.'”
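The parity-across-an-array technique described in paragraph 3.2.17 can be made concrete with a short sketch (illustrative assumptions: byte-wise XOR parity over equal-sized sectors, and a failed disk that identifies itself):

```python
from functools import reduce

def parity_sector(sectors: list[bytes]) -> bytes:
    """XOR the corresponding bytes of the given sectors; the result is
    what would be written to the one additional (parity) disk."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*sectors))

def reconstruct(survivors: list[bytes], parity: bytes) -> bytes:
    """With exactly one disk lost, XORing the surviving sectors with the
    parity sector regenerates the lost sector."""
    return parity_sector(survivors + [parity])

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]       # three data disks
p = parity_sector(data)
assert reconstruct([data[0], data[2]], p) == data[1]  # disk 1 recovered
```

This also shows why the technique is "subject only to not more than one disk failing simultaneously": with two sectors missing, the XOR no longer determines either of them.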

8.

The judge then explained the disclosure in the patent in this way:

“9.

… it should be observed that the fundamental architecture of the computer to which the disk of the prior art and the disk of the invention are connected is shown very schematically in Figure 1. There is a User CPU (Central Processing Unit) connected to a 'System Control for Disk', or Disk Controller. This is then connected via an I/O cable to the disk itself. The Disk Controller is constructed and programmed to expect the disk units to which it is connected to respond in a defined manner to the commands which it transmits to them. The hardware (number of wires in the cable, the shape of the connectors and so on) and the software (the complete set of commands and permissible responses to them) are together called the interface.

10.

The rate of data transfer is the first of a number of problems with existing disk drives that the specification identifies. After a description of the Winchester disk drive, the specification continues (column 2 line 1) as follows:

'[0005] Such disk drives have numerous problems that have been tolerated to date for lack of any improvement being available. For one example, head and magnetic surfacing technology had developed such that higher packing densities on the disk ... are possible. That has permitted more sectors per cylinder and more cylinders per disk. This has provided higher capacities and higher speeds (relatively speaking). In this latter regard, while the electronics and other areas of disk drive technology have grown so as to permit vastly higher transfer rates, the physical rotational aspects have remained fixed so as to create a bottleneck to any meaningful increase in transfer rates ...'

11.

The specification deals with 'seek time'. This aspect of the way in which the disk mechanism works affects average transfer rate to the disk drive. As such, it is an aspect of the slowness of existing prior art disk drives:

'[0006] Another limitation relative to prior art disk drives such as represented by the simplified drawings of Figures 1-3 is the "seek time" associated with physically moving the arms 36 and heads 34 in and out between selected cylinders. Particularly where movements are between radial extremes (ie between locations close adjacent the rotating center and periphery of the disk) the seek time for movement can be substantial and, such time is lost time when the disks 24 are rotating beneath the head but no reading or writing can take place ... To the System Control for Disk 14, BUS 12 and CPU 10, "seek time" appears as a wait state where no other useful work can be performed until the disk request is completed. Seek time averages the majority of the entire request cycle time, directly degrading the performance of CPU 10. The greater the number of I/O disk requests, the greater the degradation [of] system performance until an I/O or "disk bound" condition is reached at which point no greater system performance can be achieved.'

12.

In these two paragraphs, the inventor identifies straightforward physical constraints on the operation of a computer. It should be remembered that over the years the access time of small disk drives, of the kind proposed by the patent for use in the disk array, has not varied much (see the quotation from Professor Maller in paragraph 8 above), and Professor Katz used a typical figure of 30 I/O (input/output) operations per second. Rates of data transfer during such an operation have no doubt increased, but the purely physical problems of moving the arm rapidly over the surface of the disk and stopping it with immensely high precision in the right place represent a basic physical problem. These constraints are real, but the way in which they affect computer performance is not quite straightforward. For this purpose, a short diversion into how computers of the kind that are the subject of the specification work is appropriate.

13.

Generally speaking, computers run two kinds of programs. This is a consequence of the nature of the machines themselves. There are a huge number of operations, such as reading the keyboard, writing to the screen, writing to the disk and so on that are required by every program. These operations are grouped together into a piece of software called the operating system. Examples of particular relevance in this case are OS/390 (an IBM operating system for mainframe computers), Unix (an operating system for small and medium size computers) and Microsoft Windows (for personal computers). The operating system presents so-called applications programs or user programs with a defined software interface. Thus, an applications programmer can call the operating system to open a file on the hard disk, and read 50 bytes from the beginning of it. This may take one line of source code. If the applications program had to contain all the software for dealing with the disks, it would contain many thousands of additional lines of code, and it would have to run on its own. No other program would know where the files were. So the operating system maintains all the information relating to files in one place and provides all the services for manipulating them to all applications programs. It is the operating system which enables a structured file system to be maintained. It is the operating system, and not the applications programs, which controls the Disk Controller.
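In a modern high-level language, that one line might look like this (the file name is hypothetical; the point is only that the operating system, not the program, locates the data on disk):

```python
# One logical file operation: open a named file and read its first 50 bytes.
data = open("example.dat", "rb").read(50)
```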

14.

When an applications program wishes to write to a file on the disk, it calls the operating system, specifying the name of the file and the data to be written. The task of the operating system is to line the data up in the disk controller and instruct the controller to send it to the disk, to be written at sectors chosen by the operating system. The operating system maintains tables of all the file names known to it, and against each file name the sector number(s) containing the data in the file. An operation involving specifying the name of the file, and relying on the operating system to work out where that file is on the disk, is called a logical file operation.

15.

In this sequence of transactions, the execution of the applications programs can be affected in two ways. I can take the example of a write operation. First, the application program can send the data to the operating system, and carry on. This is an asynchronous operation at the logical level. An arrangement must then be made for the application program to be notified when the write is successful, or that it has failed. Alternatively, the application program can wait to know if the write was successful or failed, and then continue execution. This is synchronous operation at the logical level. Now consider the operating system. It too can send the data to the Disk Controller, and carry on without knowing if the write was successful, or it can wait until it knows. This is asynchronous and synchronous operation, respectively, at the physical level. Whether or not the operation is asynchronous or synchronous, the ultimate number of such transactions which can be performed per disk will be limited. If many disk transactions are required, further operations must be held up until the backlog (which is held in a buffer) has cleared. Anything which increases the rate of disk transactions is desirable.

16.

The second problem identified by the specification after speed is the problem of irrecoverable error.

‘[0007] Yet another detrimental aspect of prior art disk drive technology, which can best be appreciated with reference to Figure 4, is reliability with a corollary consideration of reconstructability; that is, how do we protect against lost data and can we reconstruct lost data? With respect to the prior art, the answers are "poorly" and "no".'

Figure 4 shows four successive eight-bit bytes as actually recorded on a single disk. As described, each byte is accompanied by a parity bit, which denotes whether there is an odd or even number of 1s in the byte. This provides a simple check for odd numbers of errors. If a single error occurs, the parity bit will be 'wrong'. These four successive bytes are part of the data contained in one sector. The sector may contain 256, 512, 1024 or 2048 bytes, commonly. If the read operation fails for a parity error, the disk will return an error indication and null data to the Disk Controller. The specification also refers to the use of cyclic redundancy checks (CRC) 'associated with each transferred sector'.
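The per-byte parity check described here is a one-line computation (a sketch; this is the convention in which the stored byte plus its parity bit always carry an even number of 1s):

```python
def parity_bit(byte: int) -> int:
    """0 if the byte already has an even number of 1 bits, else 1."""
    return bin(byte).count("1") % 2

assert parity_bit(0b01010101) == 0   # four 1s: parity bit 0
assert parity_bit(0b01010100) == 1   # a dropped bit leaves three 1s, so the
                                     # recorded parity bit of 0 is now 'wrong'
```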

17.

As I understand the evidence of Professor Maller, parity was not in fact used for every byte within a sector. Rather, a cyclic redundancy check was used on the contents of the whole sector. A CRC is an error detecting code that is recorded at the end of each sector of the disk. As the contents of the sector are read, the CRC is recalculated and compared with the recorded version. An error (or maybe more than one error) will be detected, but in essence the result will be like that described in the specification. If the maximum number of retries is reached without success, a complete failure is signalled.
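The recompute-and-compare step can be illustrated with a standard checksum (a sketch only: CRC-32 from the Python standard library stands in for a drive's own CRC, whose polynomial and width differ):

```python
import zlib

def write_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 of the payload as a 4-byte trailer, as recorded
    at the end of a sector."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def read_with_check(stored: bytes) -> bytes:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, recorded = stored[:-4], stored[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != recorded:
        raise IOError("CRC mismatch: sector read error detected")
    return payload

sector = write_with_crc(b"\x55" * 512)
assert read_with_check(sector) == b"\x55" * 512
```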

18.

The way that this problem was dealt with is next described (column 3 line 45):

'Where it is desired and/or necessary to be able to reconstruct lost data, the prior art has relied upon costly and time consuming approaches like redundant disks and "backing up" or copying of the data and programs on the disk to another disk, tape or the like. In a redundant disk system, everything is duplicated dynamically with the intention that if one disk has an error, the data will still be available on the "duplicate" disk. Disregarding the cost factor, that philosophy is all well and good until a transcient [sic] voltage spike (a common source of disk errors) causes the same erroneous data to be written on both disks simultaneously. Backup systems have been used from the very beginning of computer usage ...'

19.

A third problem, hardly touched on in the evidence, is identified with respect to the Disk Controller:

'With respect to the prior art of controllers and storage devices, it should also be noted that all controllers are hardwired with respect to an associated storage device. If the size of the storage device is fixed, the controller associated with it has the size fixed in its internal logic. If the size of the storage device can vary within fixed limits and size increments, at best, the controller is able to query the storage device as to which model it is and select from pre-established sizes in its internal logic for the various models. There is no ability to automatically adapt to another size or kind of storage device other than that for which the controller was designed and constructed.'

20.

The inventor has thus identified three problems with existing disk drives. The first is slowness, made up of transfer rate and seek time between tracks. The second is vulnerability to irrecoverable errors. The third relates to the flexibility of disk controllers. He then sets out his description of the efforts which have been made to overcome those problems. First is seek time. He complains that it has not been recognised as relevant to the transfer rate problem, and the description which is given of attempts to overcome the problem relates to multiple disk/controller architectures. At paragraph [0010] he refers to parallel transfer drives, in which recording takes place simultaneously at several heads of the drive. For instance, each head may be responsible for one bit of each byte, there being eight heads. Such a drive would no doubt be intrinsically faster, if much more complex, than a serial drive in which each byte is recorded after the other by one head at a particular sector.

21.

At paragraph [0011] the inventor describes his concept of fault tolerance. He observes that in a fault tolerant system no single failure should be functionally apparent to the user. He says that five characteristics are responsible for fault tolerance. These are redundancy, detection, isolation, reconfiguration and repair.

'First, every element of the system must have a backup, so that if a component fails, there is another to assume its responsibilities. Secondly, a fault must be detectable by the system so that the fault can be identified and then repaired. Thirdly, the failed component must be isolated from the rest of the system so the failure of one component will not adversely affect any other component. Fourthly, the system must be able to reconfigure itself to eliminate effects from the failed component and to continue operation despite the failure. Finally, when repaired, the failed component must be brought back into service without causing any interruption in processing. With regard to present storage systems, the concept of fault tolerance simply does not exist. None of the five above enumerated characteristics are met. As described above, in a typical prior art disk storage system, a CRC error which is not a transient and therefore correctable by a reperformance of the operation results in a very apparent inability of the system to continue.
[0012] Wherefore, it is the principal object of the present invention to provide a new approach to controllers and associated storage devices such as disk drives and the like, which provides the benefits of parallel operation employing a plurality of individual devices operating in an intelligent environment making optimum use of their capabilities through the reduction of seek time and the like.
[0013] It is another object of the present invention to provide high capacity without the need to employ more exotic and high priced storage technologies.
[0014] It is yet another object of the present invention to provide fault tolerance, high reliability, and the ability to reconstruct lost data simply and easily.
[0015] It is still another object of the present invention to provide a new approach to storage system technology which dramatically reduces, and in some cases eliminates, the necessity for backing up the mass data storage system.
[0016] It is yet a further object of the present invention to permit vast increases in the transfer rates for data to and from a storage device beyond the limits normally imposed by speeds of rotation and seek times.
[0017] It is another object of the present invention to provide a heretofore non-existent device to be interposed between conventional computer storage device controllers and conventional storage devices which provides interface transparency on both sides and a communications and operation intelligence between the conventional devices.’

22.

The foregoing objects are all said to be achieved by a device according to claim 1, and a method of using such a device according to claim 2. The basis of the invention is described thus:

'[0023] The present invention is based on replacing the single prior art disk drive with a virtual disk drive comprised of a plurality of individual and separate conventional prior art disk drives for the data, and one additional disk dedicated to the containing of error recover code (ERC) bits associated with the data wherein the plurality of disk drives operate concurrently and intelligently in parallel.'

23.

The general principles of the invention are described by reference to Figure 5. To the computer, the virtual drive of the invention 'looks' just like a conventional disk drive. The Virtual Disk Controller is said to lie at the heart of the invention (col 9 line 26), and provides for interfacing the plurality of prior art drives, including the one said to have number ('#') E/R, to the computer's conventional disk controller. Figure 6 is an abstract of Figure 11, which shows the arrangement in more detail. For present purposes, it should only be noted that the whole is freestanding and independent of the host computer. What Figure 6 shows is the Virtual Disk Controller. Card 62 provides the interface to the host computer. Cards 48 provide the interface to the multiple disk drive units, and card 48' "controls and detects failure of error/recovery disk 16'". It should also be noted that a private bus 50 interconnects the individual controller cards for the drives. I shall describe the function of this bus below. The CPU 44 provides processing power for what is an intelligent device, and 64 is the 'cache memory' which caused a great deal of controversy at the trial.

24.

Before turning to the claim, it is helpful in this case briefly to describe the three embodiments. These throw light on certain of the very general statements that I have quoted from the earlier part of the specification relating to the manner the invention worked. The specification introduces them with a passage which explains their significance:

'According to the present invention, data (where the term "data" includes computer programs which, too, are nothing more than binary numbers to the disk drive) can be allocated to the parallel disk drives 16, 16' comprising the virtual disk drive 40 in several ways. As with most aspects of computer technology there are tradeoffs in the present invention which occur relative to time, space and cost. Each manner of allocation is a separate embodiment of the present invention and provides certain advantages and disadvantages in this regard with respect to the other. Certain applications will best be served by one embodiment while others will operate best with another. Thus, the choice is which will best serve the end application. Several typical embodiments possible and the characteristics of each will now be described. Those skilled in the art will recognise that other possible configurations for the data beyond those to be described are possible within the scope and spirit of the present invention and, therefore, the specific examples to be described are not intended to be limiting in their effect.'

25.

The first embodiment is described by reference to Figures 7 and 8. The idea is that each bit of each incoming byte is written to a different disk, with a parity bit written to a ninth disk. This parity bit is the error/recovery bit. The buffer shown as 52 in the Figure holds the data as it is read and written. The determination of the location on each of the disks at which the data is to be written is the responsibility of the virtual disk controller logic. The user will ask for a particular sector. The various sectors which contain the bytes which contain the bits of the virtual sector requested are read into the buffer 52 if they are not already there. Equally, if the request is a write request for a particular virtual sector, the contents to be written will be loaded into the buffer, and written to each of the disks at its predetermined place.

26.

Errors are corrected as follows (column 11 line 56ff):

'[0029] ... The way this works employing prior art disk drives which could not individually accomplish the same thing can be understood by comparing Figures 3 and 4 to Figure 5 [sic: this obviously means Figure 7]. In prior art disk drive 16 containing the data of Figure 4, if the first byte (010101010) [sic: this is not a byte: it is one byte 01010101 plus a parity bit 0] drops a bit and now contains, for example, 010101000, the three "1" bits is odd in number and a parity error within the first byte will cause a CRC error in the sector integrity. The logic, however, does not know which bit position is involved and cannot take corrective action. Consider the same failure in the virtual disk drive 40 as depicted in Figure 7. The data within "Disk 2" representing the bit stream of bit 2 is still maintained in eight bit bytes with an associated parity bit since it is a "standard" prior art disk drive. Thus, the reconstruction logic of the present invention is informed of two facts. First, that Disk 2 had a CRC error in reading the sector which contained the bit 2 bit for the first byte, ie that it is the Disk 2 bit position (ie bit 2) which is in error. Second, that the error/recovery bit test across the first byte (010101010) is in error since (010101000) was read. Since bit 2 of the first byte is reading as "0" and is in error, in a binary system it can only correctly be a "1". By making that correction, the erroneous first byte is dynamically corrected from 010101000 to 010101010. In actual practice, this is accomplished by simply logically XORing the contents of the bit position and its corresponding error/recovery bit together in a manner well known in the art. Note that if it is the error/recovery bit drive, ie Disk E/R, which fails, the correction takes place in the same manner.'
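The worked example in [0029] can be replayed in a few lines (a sketch of the arithmetic only; the patent performs it in XOR logic, and the parity convention follows Figure 4):

```python
# Eight data disks hold one bit position of each byte; a ninth holds the
# error/recovery (parity) bit. A drive's CRC error identifies which bit
# position is wrong; XORing the others with the E/R bit regenerates it.
correct_bits = [0, 1, 0, 1, 0, 1, 0, 1]        # the byte 01010101
er_bit = sum(correct_bits) % 2                  # recorded parity bit: 0

failed = 7                                      # drive flagged by its CRC error
read_bits = correct_bits.copy()
read_bits[failed] = 0                           # the dropped bit reads as 0

survivors = [b for i, b in enumerate(read_bits) if i != failed]
rebuilt = (sum(survivors) + er_bit) % 2         # XOR of survivors and E/R bit
assert rebuilt == correct_bits[failed] == 1     # 010101000 corrected to 010101010
```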

27.

The first embodiment has a number of aspects which are common to all three. The first is that the error/recovery information, in this embodiment one parity bit per byte, is stored on a single dedicated disk drive. The data bits are spread amongst a number of individual disks, here eight. The description makes it clear that the user interfaces with the buffer 52, that is, the user is not concerned with the actual reading and writing of data to the disk, which is controlled asynchronously by the virtual disk control logic. It is pointed out that a disk can be removed and replaced with another and that the contents of the disk will be automatically restored during use. Finally, the inventor makes clear his interest in maximising performance:

'In this embodiment, maximum speed is sacrificed for simplicity of control logic and lower cost.'

The reason for the sacrifice is not far to seek. To read a single byte requires reads from all nine drives, and to write a single byte involves a write to all nine drives, during which time the data in the buffer corresponding to the byte to be written must not change, or its integrity is lost. Thus the system must wait until all the necessary writes have taken place.

28.

The second embodiment (Figures 9 and 10) is said to be based on 'the principle of performance maximisation, ie reduction of seek time, etc.' Here the bytes are not split up. Sectors (ie 256, 512, 1024 ... bytes) are written in turn to the drives. The error recovery information is derived on a per-sector basis from all the corresponding sectors. Thus, if there are six drives, six successive sectors of data might be written to the 200th sector of each drive. At each write, the error/recovery information is derived from the contents of all six corresponding sectors and written to a dedicated error/recovery data disk. The buffer is large enough to contain at least the data from seven sectors (that is, six data sectors and a sector of error/recovery information) and the latter is calculated continuously from the contents of the others.

29.

Professor Maller and Professor Katz identified errors in Figures 9 and 10. Although described as showing bytes of data, they in fact show a byte and a parity bit for that byte, or nine bits in all. This error is probably related to the discussion of parity in the introductory part of the specification to which I have already referred, where the inventor suggests that the disk has recorded on it parity bits for each individual byte as well as CRC's for each sector. It does not matter, since the principle is simple enough.

30.

In this embodiment, every write of a sector involves a write to two disks, a data disk and the error recovery disk. During every write of a data sector, the corresponding error/recovery information must not change until it is written. This is much better than the write to eight disks involved in the first embodiment. Equally, reads involve only one disk, that containing the requested sector, unless an irrecoverable error occurs, when data from all the corresponding sectors of the data disks and the error/recovery disk are read so as to reconstruct the missing data in the same way as I have already described. Again, the failed disk will be identified, and the data can be reconstructed in the same way.
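A sketch of the read path just described (illustrative only: the 'disks' are lists of sector contents, and a None entry models an irrecoverable error):

```python
from functools import reduce

def xor_sectors(sectors: list) -> bytes:
    """Byte-wise XOR of equal-length sectors."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*sectors))

def raw_read(disk: list, sector_no: int) -> bytes:
    """Stand-in for a physical read; a None entry models a hard error."""
    if disk[sector_no] is None:
        raise IOError("hard error")
    return disk[sector_no]

def read_sector(data_disks: list, er_disk: list,
                disk_no: int, sector_no: int) -> bytes:
    try:
        return raw_read(data_disks[disk_no], sector_no)  # normal case: one disk
    except IOError:
        # Hard error: read the corresponding sector of every other data
        # disk and of the error/recovery disk, and XOR them together.
        others = [raw_read(d, sector_no)
                  for i, d in enumerate(data_disks) if i != disk_no]
        return xor_sectors(others + [raw_read(er_disk, sector_no)])

disks = [[b"\x01"], [b"\x02"], [b"\x04"]]
er_disk = [xor_sectors([d[0] for d in disks])]  # parity kept current on writes
disks[1][0] = None                              # disk 1, sector 0 now unreadable
assert read_sector(disks, er_disk, 1, 0) == b"\x02"
```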

31.

The final embodiment is concerned with applications which require large reads and writes. The unit is now the cylinder. The description of this embodiment is by reference to Figures 12 and 13. It does not call for separate comment, save to note that the overall structure is still that of Figure 6, in which the drive 16' is employed to store the error/recovery information, which in this embodiment is cylinder based.

32.

That concludes the part of the specification which is concerned with the layout of the data between various disks. The manner in which the Virtual Disk Controller is structured is now discussed, by reference to Figures 11 and 14. The Figure 11 controller is intended to operate in the sector-based system of Figures 9 and 10. It will be observed that the clear distinction between the data disks 16 and the error/recovery disk 16' that is maintained throughout the description of the embodiments of the invention is again maintained here. Each disk drive has associated with it a device controller (60, 60') that presents an interface appropriate to that disk. Probably the device controllers should be connected directly to the boxes labelled DMA (Direct Memory Access) but that does not matter. The device controllers with DMA read and write to the data buffers. The E/R disk controller has associated with it Master E/R logic and Reconstruct logic.

33.

The function of the Master E/R logic is to perform the generation of the error/recovery bits. It communicates with each of the E/R logic units, whose function is to detect alteration in data as it is written to a particular drive. When alteration is detected, the corresponding bit on the error/recovery disk must be altered as well, and this is the function of the master logic.

34.

The reconstruct logic responds to a detected error, by XORing corresponding data bits from all the drives apart from that in error to regenerate the erroneous data.

35.

The passage relating to Figure 14 and the cache memory 64 of Figure 11 is of considerable importance. The cache memory stores data written from and requested by the system over the system disk controller interface. So far as writes are concerned, the idea is that the user will consider that a 'write' has occurred once it has transmitted the data over the interface with the address or addresses on the virtual disk to which it is to be written.

'It is into the memory 64 that asynchronously read sector data is moved when the virtual disk drive 40 is operating in the manner as described with respect to Figures 9 and 10. In this regard memory 64 is an asynchronous queue for the movement of data to and from the disk drives 16. To maximise the performance increases possible with the present invention, when the user CPU 54 presents a block of data to be written to "disk" (ie the virtual disk drive 40 which is transparent to him) the data is moved into an available area of the memory 64 and an immediate acknowledgement made to the user CPU 54. Thus the user CPU believes that the requested disk write has been accomplished.'

36.

Thus the first aspect of the improvement of performance is the immediate return from a write operation which the user CPU sees. What happens to the data in the cache memory 64? It is written to the appropriate drive as it becomes free:

'The actual write to the appropriate disk drive 16 for the sector involved takes place whenever possible thereafter. The logic of the CPU 44 in the interface and control portion 56 asynchronously writes from the memory 64 into the appropriate data buffer 68 when it is next available for a write to disk.'

37.

The specification says that this has a particular effect on transfers:

'In this regard the logic maximises the transfers out of the memory 64 without regard to traditional FIFO [first in, first out] and LIFO [last in, first out] procedures. Rather it attempts to keep disk transfers maximized by writing out the best data for minimising seek times and employing disk drives which would otherwise be idle.'

It should be noted that no algorithms for minimising seek times or for employing disk drives which would otherwise be idle are actually disclosed.

38.

A further utility of the memory 64 is described in paragraph [0040] of the specification. This is simply stated. Since writes to disk take place asynchronously, it is possible that a read is requested from data which is in the queue to be written but has not yet been written. Alternatively a further write request is made which would overwrite data already enqueued. In either case, the necessary operation can be carried out on the data in the cache without a disk access at all.
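The behaviour of memory 64 described in paragraphs 35 to 38 can be sketched as a small write-back cache (an illustration only; the patent implements this in the controller's hardware and firmware, and all names here are invented):

```python
class VirtualDiskCache:
    """Sketch of the memory-64 behaviour: writes are acknowledged at once
    and queued; reads of queued or cached data never touch a disk."""

    def __init__(self):
        self.pending: dict[int, bytes] = {}  # sector -> data awaiting write
        self.cached: dict[int, bytes] = {}   # sector -> data already read

    def write(self, sector: int, data: bytes) -> str:
        self.pending[sector] = data  # a later write simply overwrites in place
        return "acknowledged"        # immediate ack: the user CPU believes
                                     # the disk write has been accomplished

    def read(self, sector: int) -> bytes:
        if sector in self.pending:   # queued but not yet on disk
            return self.pending[sector]
        if sector in self.cached:    # previously read
            return self.cached[sector]
        data = self._disk_read(sector)  # only now is a disk touched
        self.cached[sector] = data
        return data

    def flush_one(self) -> None:
        """Called asynchronously, 'whenever possible thereafter'."""
        if self.pending:
            sector, data = self.pending.popitem()
            self._disk_write(sector, data)
            self.cached[sector] = data

    def _disk_read(self, sector: int) -> bytes:  # stand-ins for the real
        return b"\x00" * 512                     # drive operations
    def _disk_write(self, sector: int, data: bytes) -> None:
        pass
```

The design choice the specification is driving at is visible in `write`: the acknowledgement does not wait for `flush_one`, so the user CPU's view of a completed write is decoupled from the physical disk operation.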

39.

The nature of the optimisation which is achievable by use of the CPU 44 and its associated cache memory is again described in paragraph [0041]. I shall quote the most material passage.

'In this regard, in addition to the fact that a plurality of individual disk drives are employed and the fact that detection and reconfiguration of lost data is possible, the most important factor of the present invention is the incorporation of a microcomputer to intelligently and efficiently optimise all the mechanical movements of the individual drives. As can be appreciated, this is a two edged sword; that is, there must be the individual disk drives with their separately positionable mechanical mechanism and there must be intelligence in the manner in which the drives are positioned. In the present invention, the CPU 44 is able to concurrently allocate the read/write operations to the various disk 16, 16' in the most optimum manner, looking for operations that maximise efficiency. For example, in a conventional disk drive, operations are performed sequentially. By contrast, in the present invention, the intelligence of the logic contained within the CPU 44 is designed to concurrently and asynchronously employ the various drives 16, 16' (and the cache memory 64) to maximise efficiency. For example, if drive "n" is at cylinder 13 and there is a request queued for the same drive at a nearby cylinder, the CPU 44 can be programmed to perform that request prior to one requiring that the arm and head assembly move to a more removed position. Again the various possibilities for the "intelligence" of the CPU 44 made possible by the unique structure of the virtual disk of the present invention providing for true concurrent operation are largely a function of the application to which it is applied. In some applications, for example, sequential operation might be a necessity and the above-described example of taking requests out of turn to take advantage of cylinder positioning might not be desirable.'
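The cylinder-13 example corresponds to what is now commonly called shortest-seek-time-first scheduling. Purely for illustration (as the judge observes in the next paragraph, the specification discloses possibilities rather than algorithms), such a policy can be sketched in one function:

```python
def next_request(current_cylinder: int, queue: list[int]) -> int:
    """Serve the queued request whose cylinder is nearest the arm's
    current position, rather than taking requests in arrival order."""
    return min(queue, key=lambda cyl: abs(cyl - current_cylinder))

assert next_request(13, [80, 14, 3, 55]) == 14  # the 'nearby cylinder' first
```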

40.

I think that it is fair to say that this passage indicates possibilities rather than sets out to provide solutions. The concluding sentences of the passage I have quoted make it clear that what is to be seen as an optimal arrangement necessarily depends upon the intended application. Thus the specification should be read as disclosing a device which is susceptible to optimisation. Such optimisation is not straightforward: Professor Maller accepted that it could be described as onerous.

41.

Claim 1 is as follows:

'A storage device system for computers capable of dynamically and transparently reconstructing lost data, comprising:
(a) a plurality of first individual storage devices (16) for storing digital information;
(b) a second individual storage device (16') for storing error/recovery code bits;
(c) means for generating and storing error/recovery code bits in said second individual storage device (16') according to a predefined error/recovery code checking algorithm for said digital information at corresponding respective bit positions across said plurality of first individual storage devices (16); and
(d) means for using said error recovery code bits in combination with the contents of said corresponding respective bit positions across said plurality of first individual storage devices (16) to reconstruct a changed bit in error in said digital information according to said error/recovery code checking algorithm when one of said first and second individual storage devices (16, 16') detects an error during the transfer of said digital information;
(e) interface means (46, 56) disposed for receiving read and write requests from a user CPU (10); and
(f) a plurality of storage device controller means (60') connected between said interface means and respective ones of said plurality of storage devices (16) for interfacing with said plurality of storage devices (16) and operating them concurrently characterised in that
(I) said interface means (56) comprises
(i) a buffer memory (64) for storing data of write requests from said user CPU (10) and writing said data from said buffer memory (64) to said device controller means (60') and said storage devices (16) asynchronously in time and sequence with respect to said write requests,
(ii) said CPU means (44) and said buffer memory (64) being designed to concurrently and asynchronously allocate read/write operations to the various individual storage devices (16) looking for operations that maximise efficiency; and
(II) and said interface means (46, 56) comprising CPU means (44)
(i) said CPU means (44) including a logic (48) for checking data in said buffer memory (64) and indicating such data as having been read from one of said individual storage devices (16) or as having already been queued to be written to one of said individual storage devices (16), said data being read from said buffer memory (64) without an actual read from said individual storage device when a read request therefor is received from said user CPU (10), whereby said buffer memory (64) acts as a cache memory in such cases;
(ii) said CPU means (44) including logic for immediately acknowledging a write to one of said individual storage devices (16) upon the data to be written being placed in said buffer (64), the user CPU (10) believing the disk write operation to have been accomplished.'

(We have added the roman numerals to aid analysis.)”

Validity

9.

Various allegations of invalidity were raised before the judge. He dismissed them all save for the allegation that claims 1 and 2 of the patent were obvious. As required by section 1(1)(b) of the Patents Act 1977, a patent may be granted only for an invention if it involves an inventive step. Section 3 states that an invention shall be taken to involve an inventive step if it is not obvious to “a person skilled in the art having regard to any matter which forms part of the state of the art …”.

10.

The judge’s reasoning for concluding that claim 1 was obvious was set out in paragraphs 78 to 81 of his judgment.

“78.

However, it remains the case that the precharacterising portions of this claim are old. They are published in Ouchi as I have indicated. Professor Maller accepted that the underlying concepts of IBM 1 were well established:

'Q. Do you agree that this is disclosing, if it was not indeed already known, using read accesses to an idle disk if the other disk is busy? A. Yes, standard practice.
Q. In the next paragraph you agree it tells us what you may say was very well known, using a high speed nonvolatile memory such as a cache for writes to a disk? A. Yes.
Q. It is telling us in the last sentence, doing it asynchronously? A. Yes.
Q. I am not going to read out the next two paragraphs, they are very long. By all means refresh your memory if you want to. This is telling us, if we did not know it already, of the concept of fast read, fast write? A. Yes.
Q. Bearing that in mind, I want you now to go back to what you were calling feature Beta [the characterising portion] in the claim. For this purpose you will need to go back to the patent. I am not concerned so much with the language of the claim, but what according to one view, is distilled out as the concept of it. The way that I put it to you earlier, I believe you are agreed, to assume anyway, was that you have intelligence, so as to maximise efficiency, and you have a buffer or cache arrangement, and you have a fast read, fast write facility, and I suggest to you that if a person who was experienced, skilled in mass storage, had read IBM 1 in June 1987, whatever else may have been imparted to him, those concepts which I believe were Beta, if he did not know them already he would know them after reading that document? A. Yes, I mean, the underlying concepts there I would consider to be well established.'

79.

It must be understood that IBM 1 does not disclose a disk array. No questions that I can find were put on the basis of the disclosure in Timsit and Ouchi, which were to all intents and purposes abandoned. The only starting point, therefore, is the precharacterising portion of the claim which is admitted (rightly) to be old in the body of the specification. A cache is common general knowledge.

80.

Thus, we have a disk array in combination with a common general knowledge cache used for the purposes discussed in the extract from the cross-examination above, which are themselves common general knowledge. Professor Katz took the view that the employment of such a cache was obvious with a multiple disk system: see his answer at page 922 line 12. Professor Katz did not himself get round to using an intelligent cache until 1991, after considerable work on RAID.

81.

Taking the evidence as a whole, the difference between claim 1 of the patent and the admittedly old matter lies in the characterising portion of the claim. Viewed from the perspective of the skilled man in the art, such apparatus is in principle part of the common general knowledge and disclosed in, for example, IBM 1. I think that this is an obvious step to take. Claims 1 and 2 are obvious over the admitted prior art and the common general knowledge.”

11.

Mr Thorley at the outset of his submissions on validity submitted that the judge had failed to understand Hitachi’s case. He drew to our attention the pleading which contained allegations that the patent was obvious having regard to a number of pleaded items of prior art including European patent 0156724 (Timsit), a technical bulletin of IBM (IBM 1) and a paper referred to as Ouchi. However Hitachi’s case had, he submitted, been significantly modified. He referred to statements made by Mr Prescott QC, who appeared for Hitachi before the judge (Evidence 1 pages 12 and 13) and page 16 of the Hitachi closing written skeleton argument. Those statements, Mr Thorley submitted, showed that Timsit and Ouchi, referred to by the judge in paragraph 79 of his judgment, had not been relied on by Hitachi as prior art which would form the “state of the art” having regard to which claim 1 was obvious. That left only IBM 1 as a relevant piece of prior art. That was not the starting point relied on by the judge and in any case could not upon the evidence provide a foundation for a conclusion that the patent was obvious.

12.

We do not believe that that submission fairly reflected the submissions made by Mr Prescott on behalf of Hitachi. At Evidence 1 page 11 Mr Prescott said that it was not disputed that the precharacterising part of claim 1 was not new. He then said that he would endeavour to put his case in such a way that it was not necessary to go to Timsit or Ouchi. As appeared from the skeleton and from his submissions (Evidence 9 pages 1195 to 1203) his case started from the statements in paragraph 18 of the patent which acknowledged that the disclosure in Timsit was “of a storage device system in the form of a fault-tolerant Winchester type disk system, having the features of the pre-characterising portion of claim 1 below.” That acknowledgement, Mr Prescott believed, meant that there was no need to go to Timsit. He proposed to base his case on an admission which was not in dispute and in any case was an admission against interest. Although Ouchi was not acknowledged in the patent to disclose a storage computer having the precharacterising features of claim 1, there was, it seems, no dispute that it also disclosed the precharacterising features in claim 1. Starting from that prior art, the deficiencies were those set out in paragraphs 11 to 17 of the patent (see paragraph 21 of the judge’s judgment cited above). The solution was obvious having regard to the disclosure in IBM 1 which was alleged to be common general knowledge.

13.

Mr Thorley did not accept that the disclosure in IBM 1 formed part of the common general knowledge and we shall deal later in this judgment with his submission. But he raised a more fundamental objection to the way that Mr Prescott had argued Hitachi’s case which appears to have been accepted by the judge. He submitted that it was not permissible to look solely at an acknowledgement of prior art. The statutory test required that obviousness should be judged having regard to matter which formed part of the state of the art. That meant that it had to be judged having regard to Timsit itself which, according to Professor Maller, described “a very complex system for storing data across multiple storage devices.” When first read, he thought that the system of operation as described was baroque and over-complex. He would therefore have rejected it as a starting point. Professor Katz said that Professor Maller had taken an overly detailed view of the architecture of Timsit as a starting point to reach the '287 patent. In his view the importance of Timsit was that it showed how a RAID 2 system (a system with a disk array) might work, using parity, thereby also suggesting RAID 3 (a parity-protected disk array with block-interleaved data).

14.

We accept that the 1977 Act requires obviousness to be judged having regard to matter which formed part of the state of the art. We reject the submission that Hitachi gave up the pleaded case that claim 1 was obvious having regard to Timsit. Their case was that there was no need to actually look at Timsit or Ouchi as there was an admission in the patent that Timsit was prior art, that it disclosed the precharacterising features of claim 1 and there was no dispute as to the disclosure in Ouchi. That was sufficient to provide a foundation for an obviousness argument starting from that prior art. If Storage wished to contend that the statement in the patent did not provide an adequate statement of what was disclosed in Timsit, then it was for them to lead the appropriate evidence.

15.

Did the disclosure in IBM 1 form part of the common general knowledge? The judge held in paragraph 78 of his judgment that the “underlying concepts of IBM 1 were well established”. Mr Thorley submitted that even if that was so that did not mean that the combination of the particular concepts disclosed in IBM 1 was common general knowledge. Far less did it mean that the concepts as combined and implemented in the characterising portion of claim 1 formed part of the common general knowledge. In theory he is correct. However the evidence established that the concepts in IBM 1 individually and in combination were common general knowledge at the date of the patent: of course limited to a protocol for controlling the operation of two DASDs and related caches in a duplexed and mirrored DASD pair. Professor Maller said in his witness statement that the underlying principles of using cached disks in the way described in IBM 1 were well understood by the skilled person in June 1987. In his cross-examination (see paragraph 78 of the judge’s judgment which is set out above) he accepted that the underlying concepts were well established. He did not seek to say that this was so only individually.

16.

Professor Maller and Professor Katz agreed that the concepts disclosed in IBM 1 were not applied to a disk array as required by claim 1. They disagreed as to the effect of the two caches used in IBM 1. There is no need to resolve that dispute. However it is important to keep in mind the submission of Mr Thorley that IBM 1 did not disclose caches with the intelligence needed to satisfy the characterising features of claim 1. In particular the caches did not generate parity data or have the ability to update such data.

17.

The judge in paragraph 75 of his judgment said that it was often helpful to adopt the structured approach suggested by Oliver LJ in Windsurfing International Inc v Tabur Marine (Great Britain) Ltd [1985] RPC 59 at 73. We agree. Therefore it is necessary to take the four steps suggested.

18.

The inventive concept is that called for in claim 1. It is not necessary to set out all the common general knowledge that would form part of the mantle of the ordinary skilled person. It is sufficient to note that the concepts in IBM 1 were included within that knowledge.

19.

The third step is to identify what, if any, differences exist between the matter cited as being part of the state of the art and the alleged invention. We have already considered what was the state of the art relied on. In essence it was Timsit as acknowledged to be prior disclosure. That was stated in the patent to disclose the precharacterising features of claim 1. Thus the differences are those contained in the characterising features of the claim. The same applied to Ouchi.

20.

The fourth step requires the court to ask itself whether, viewed without any knowledge of the alleged invention, the differences constitute steps which would have been obvious to the notional skilled person or whether they required a degree of invention.

21.

The case for Hitachi was that once it was known to have a storage device system for a computer capable of dynamic and transparent reconstruction of lost data having features (a) to (f) of claim 1 (acknowledged in the patent to be known), there could be no invention in applying the concepts disclosed in IBM 1 to such a device. IBM 1 disclosed the use of intelligence for controlling the operation of two DASDs in a duplexed and mirrored pair and it was obvious to use more intelligence in the known system having features (a) to (f) of claim 1. That was particularly the case as the mechanics of how to do it were accepted by Storage to be within the capabilities of the skilled person once the idea was perceived.

22.

The case for Storage was that the disclosure in Timsit was confused and somewhat baroque and the skilled person would not start from Timsit. That was rejected by the judge who in paragraph 81 of his judgment (see paragraph 10 above) concluded that claim 1 was obvious.

23.

In Biogen Inc v Medeva Plc [1997] RPC 1 at 45, the court drew attention to the need for appellate caution in reversing the judge’s evaluation.

“Where the application of a legal standard such as negligence or obviousness involves no question of principle but is simply a matter of degree, an appellate court should be very cautious in differing from the judge’s evaluation.”

24.

There was in our view evidence upon which the judge could have come to the conclusion that he did. Professor Katz in his second report considered that the cache in IBM 1 could be considered as a single cache. He accepted that IBM 1 was for a mirrored disk. A cache required intelligence and once the system in features (a) to (f) of the claim was known, there was no material difference between scheduling to an array of disks and scheduling to a single disk. Intelligence was needed and there was no difficulty in providing the intelligence for the characterising portion of claim 1.

25.

In cross-examination Mr Thorley asked Professor Katz about what was not disclosed in IBM 1. At Evidence 7 page 91 line 16 Mr Thorley asked:

“Q. The question I put to you was that if you were going to design an array of disks, the single virtual storage unit array of disks, you would have to go back to the drawing board?

A. I do not see why.

Q. How would you implement it using the architecture shown in IBM 1?

A. I admit that the architecture shown here is given from the perspective of two disks. It does teach me about the nature of careful updates for data and its redundancy. It tells me about how to trickle information out from the non-volatile portion of the cache (subject to those constraints) to the attached disks. To apply that to a disk array, I do not think would require going back to the drawing board, but would involve having maybe a common cache, a unified cache. It would involve some extension but all of the principles here really could be used in such a system. There would be a foundation for that.

Q. Once you have decided to design a system, some of the ideas in this paper might be of use to you?

A. Yes, but again it is really speaking to me about “I have data and I have to write something else in order to keep the data consistent and how do I organise a memory system to support that?” It does not lead directly to parity supported redundancy and management, but if I combine the need for parity with data that I might have in a multidisk system, I would use the teachings of this disclosure in such a system.”

26.

The judge was entitled to conclude that it was Professor Katz’s evidence that once it was known to have a system with the features (a) to (f) of claim 1 it was obvious to apply to them the concepts of IBM 1. Of necessity that would require further intelligence but how to do that was within the competence of the notional skilled person.

27.

Mr Thorley sought to distance claim 1 from the conclusion reached by the judge. He relied upon the evidence of Professor Maller to which we have referred that Timsit would not have been a starting point for development. He also pointed out that Ouchi was a relatively old piece of prior art. However his difficulty is the acknowledgement in the patent that Timsit disclosed features (a) to (f) of claim 1. It follows that the skilled person, even taking account of Professor Maller’s views, would realise that those features were not new. That leads to the need to answer the fourth question of Windsurfing and the dispute as to whether it was obvious to use and develop concepts in IBM 1.

28.

We conclude that the judge was entitled to come to the conclusion that he did and therefore we dismiss the appeal on validity. It follows that there is no need to decide whether the Hitachi device fell within claim 1.

Order: Appeal dismissed; no order as to costs.

(Order does not form part of the approved judgment)
