
Commscope Technologies LLC v SOLiD Technologies, Inc.


Neutral Citation Number: [2022] EWHC 769 (Pat)
Case No: HP-2020-000017
IN THE HIGH COURT OF JUSTICE
BUSINESS AND PROPERTY COURTS OF ENGLAND AND WALES
INTELLECTUAL PROPERTY LIST (ChD)
PATENTS COURT

Rolls Building

Fetter Lane

London, EC4A 1NL

1st April 2022

Before :

MR JUSTICE MELLOR

Between :

COMMSCOPE TECHNOLOGIES LLC

Claimant

- and -

SOLiD TECHNOLOGIES, INC.

Defendant

James Abrahams QC and Ben Longstaff (instructed by Powell Gilbert LLP) for the Claimant

Hugo Cuddigan QC and Edward Cronan (instructed by Jones Day) for the Defendant

Hearing dates: 7th-9th & 14th December 2021

APPROVED JUDGMENT

I direct that pursuant to CPR PD 39A para 6.1 no official shorthand note shall be taken of this Judgment and that copies of this version as handed down may be treated as authentic.

COVID-19: This judgment was handed down remotely by circulation to the parties’ representatives by email. It will also be released for publication on BAILII and other websites. The date and time for hand-down is deemed to be Friday 1st April 2022 at 10.30 am.

.............................

THE HON MR JUSTICE MELLOR

Mr Justice Mellor:

INTRODUCTION

The issues for decision

The Expert Witnesses

The skilled person

Technical background

Cellular wireless communication

RF signals

Modulation of radio carriers

Frequency shifting

Modulation and frequency shifting in more detail

Sampling theory

Digital representation of samples

Processing of digital samples

Distributed antenna systems

Common General Knowledge

CGK points in dispute

Network architectures

The LGCell DAS product

Techniques for handling overflow

Digitisation capacity/bandwidth capacity for ADCs

THE PATENT

Issues of Construction

(i) ‘to communicative couple’/ ‘communicatively coupled’

(ii) ‘comprising’/ ‘comprise’

(iii) ‘an original forward-path analogue radio frequency signal’ / ‘an analog-to-digital converter to convert the original forward-path analogue radio frequency signal’

(iv) ‘a digital-to-analogue converter to convert the summed reverse-path digital samples to a reconstructed reverse-path analogue radio frequency signal’

(v) ‘a respective original reverse-path analogue radio frequency signal comprising a reverse-path radio frequency spectrum’

VALIDITY

Legal principles

THE PRIOR ART

What does Oh disclose to the skilled person?

Does Oh anticipate claim 1 of EP850?

Was claim 1 of EP850 obvious over Oh?

Was claim 7 of EP850 obvious over Oh?

INFRINGEMENT

CONCLUSIONS ON EP850

THE APPLICATION TO AMEND EP626

Jurisdiction

INTRODUCTION

1.

This is the trial of a patent action concerned with distributed antenna systems or DAS for short. A DAS is used in a cellular wireless communication system (in the forward or downlink direction) to distribute radio frequency (“RF”) signals transmitted from a base station into areas where the signal has been significantly attenuated so that e.g. mobile devices in such areas can continue to operate effectively. A DAS is bi-directional, so (in the reverse or uplink direction) it is used to gather wireless signals from e.g. mobile devices and transmit them back to the base station.

2.

DAS were known, but the patent with which this judgment is principally concerned, EP(UK) 2 290 850 (‘EP850’), provides a digital DAS, in the sense that the internal handling of the analogue RF signals is digitised.

The issues for decision

3.

Originally, the Claimant (‘CommScope’) sued the Defendant (‘SOLiD’) for infringement of two patents: EP850 and EP(UK) 1 570 626 (‘EP626’). SOLiD counterclaimed for invalidity, and CommScope reacted by applying to amend each of the patents.

4.

The action for infringement of EP626 and the corresponding counterclaim to invalidate it have been settled on terms, leaving outstanding CommScope’s application to amend EP626 which I have to discuss later, and the claims in relation to EP850. Those have narrowed in that:

i)

CommScope has abandoned its conditional application to amend EP850;

ii)

CommScope agrees that the trial can focus on claims 1 and 7 of EP850, both apparatus claims. These are the only claims said to be independently valid.

5.

Thus, the bulk of this judgment is concerned with EP850, on which the issues I have to decide are:

i)

Whether claim 1 of EP850 is anticipated by or obvious over the single piece of prior art pleaded, which is called Oh. Oh is a Korean patent application published on 5 August 1999, i.e. before the agreed priority date of EP850 of 19 July 2000.

ii)

Whether claim 7 is obvious over Oh.

6.

Resolution of those issues naturally depends on a proper identification of (a) the Skilled Person and (b) his or her Common General Knowledge (‘CGK’), both of which were in issue.

7.

Infringement remains formally in issue. SOLiD say that if claims 1 and 7 are construed as they say they should be, no issue of infringement arises. If claims 1 and 7 are construed narrowly, as CommScope contends, there was a hint at the start of the trial that some non-infringement arguments might be raised, but none appeared.

8.

The Grounds of Invalidity also contain a fairly standard insufficiency plea, to the effect that the patent is no more enabling than Oh. To the extent necessary, the squeeze seems to have had its effect so there is no need to say anything more about insufficiency.

9.

As to Oh, there are two disputes. The first and principal dispute is whether the disclosure is limited to a multi-channel arrangement. The second and related dispute is whether the disclosure in Oh would be read by the Skilled Person as being designed for and limited to a specific telecoms standard that had been rolled out in South Korea around the time the document was filed and published. The standard in question was TTA-62, a Korean CDMA standard based on IS-95 (also known as ‘CDMAOne’).

10.

Subject to those disputes, there are a number of issues as to the proper construction of various terms used in claim 1, some of which also appear in claim 7. At least some of the construction issues in turn depend on what was CGK. As is usual in a judgment in this type of case there is plenty of terminology. Documents in the case used both ‘analog’ and ‘analogue’. Generally I have used ‘analogue’, but the two terms bear identical meanings. At the outset I define ADC as ‘analog-to-digital converter’ and DAC as ‘digital-to-analog converter’.

The Expert Witnesses

11.

CommScope called Dr Anthony Acampora as its expert witness and SOLiD called Professor Alwyn Seeds. Between them the experts served a total of seven reports. The first report from each expert set out their views in the usual way, from which it became apparent that there were some disputes over what constituted CGK. Each expert responded in their second report. However, Dr Acampora’s second report raised a fresh issue as to what was generally known about expansion units, so I gave permission at the PTR for service of a third report from Professor Seeds responding on that issue. Following the PTR, Dr Acampora served a third report giving further evidence on CGK and in relation to obviousness of claim 7, for which no permission had been given, but SOLiD did not resist the introduction of it. Finally, Dr Acampora served a fourth report to deal with certain points on the amendment of EP626 which were raised by the Comptroller.

12.

There was a very marked contrast between the two experts. Dr Acampora was a poor witness. As SOLiD submitted, the technology in this case was not complicated. Dr Acampora plainly had a technical understanding of all the relevant technology. However, I was entirely satisfied from his cross-examination that he had no relevant practical experience in the field of DAS and therefore was not in a position to provide the Court with any meaningful evidence as to what was or was not CGK. In his oral evidence, Dr Acampora frequently gave very long answers which did not address the question put. As SOLiD submitted, he had a number of ‘red lines’ which he would not cross, even when he was unable to articulate any coherent reason for the position he adopted, and he had difficulty with adopting the correct role of an expert witness before the English Court.

13.

In that regard, it was apparent from Dr Acampora’s CV documents that he had given evidence on patent matters in the US many times, and on behalf of CommScope four times. I acknowledge that he gave evidence for SOLiD in one US case. However, in my view, Dr Acampora was one of those expert witnesses who, having significant experience of the US system, had significant difficulty adjusting to the rules applying to the giving of expert evidence in this Court. Overall, I concluded that Dr Acampora saw his role as an advocate for CommScope’s case. Despite all that, Dr Acampora did make certain important concessions.

14.

The contrast with Professor Seeds could hardly have been greater. Professor Seeds was working in the DAS field around the priority date. Despite the fact that he was himself highly qualified, experienced and inventive, I am satisfied that he took care to ensure that he considered the issues in this case from the perspective of the man of ordinary skill in the field who was uninventive, i.e. the people who are actually going to do the legwork to create the products, as he put it, not research leaders. It was evident that his experience of teaching students and employing people in the DAS field assisted him in doing that. In his oral evidence, he was careful, knowledgeable and cooperative, giving his evidence in an impartial fashion. In short, I found him to be a model expert witness. Wherever there was a conflict, I have preferred the evidence of Professor Seeds.

15.

Notwithstanding my criticism of Dr Acampora, the experts were able to educate the Court as to the CGK and characteristics of the Skilled Person appropriately and I am grateful for their assistance.

The skilled person

16.

On the basis of the written evidence, there did not appear to be much between the experts’ characterisation of the Skilled Person for EP850. Professor Seeds said the Skilled Person required knowledge of three areas of technology, namely RF communications, opto-electronics and digital signal processing (‘DSP’), and this is confirmed by my findings as to the CGK. Although that combination of knowledge might be found in one person, it was perhaps more likely that the Skilled Person was in fact a team of electronic engineers working in the field of DAS. Such engineers would have at least a Masters level qualification in electronic engineering plus around 5 years of relevant work experience. One or two members of the team might also have PhDs.

17.

In his written evidence, Dr Acampora appeared to characterise the Skilled Person in a similar way. In Closing, SOLiD submitted that the cross-examination of Dr Acampora revealed that he had an eccentric view of the proper characteristics of the Skilled Person. SOLiD relied on the following four answers:

A. I believe that a person would have a familiarity with radio frequency communications, but once again I do not believe that the skilled person would necessarily have the skills to build a radiocommunication system.

Q. Is this right: your skilled person, without any outside assistance, would not actually be able to build a working DAS system as described in the patents?

A. That is correct.

First of all, I do not believe that the skilled person, as I have defined that person to be, would have been involved in anything more than a functional design of a DAS. I do not believe a skilled person would have gotten down into the nuts and bolts, if you will, in such a way as to design an electro-optic interface or, for that matter, to program a digital signal process.

So in that role, I believe that the skilled person would be aware of and have a working knowledge of distributed antenna systems but not necessarily be developing distributed antenna systems.

18.

To a degree I consider that SOLiD’s submission requires these quotes to be taken out of the context in which they were made. The context is indicated in the third answer. What Dr Acampora was saying was that the Skilled Person would not build every component from scratch. Where they were available, the Skilled Person would buy off the shelf parts. In that sense, his approach was entirely realistic. However, there remained something in SOLiD’s point. The Skilled Person is involved in more than just ‘functional design’, by which Dr Acampora seemed to envisage an exercise of drawing boxes on a page, allocating functions to them and drawing connections between them. In that sense, these answers seemed to reflect Dr Acampora’s own lack of experience in the DAS field. Overall, I do not consider that SOLiD’s submission, to the extent that I accept it, takes matters beyond the point I have already accepted – that Dr Acampora had no practical experience in the DAS field.

19.

It was apparent that at the priority date DAS was a mature technical field in which a number of businesses competed. Professor Seeds named a selection: Andrew, LGC Wireless, Foxcom, Mikom, Tekmar Sistemi, Decibel Products, Alan Microwave, ADC etc. An industry report exhibited by Professor Seeds forecast turnover of $1bn between 2002 and 2006 in the US alone. Professor Seeds also named a number of significant installations of DAS in high-profile buildings which he said the Skilled Person working in the UK would have been aware of.

20.

Entirely consistent with that, Professor Seeds gave unchallenged evidence that the Skilled Person/Team would have had knowledge and experience in circuit and systems design for wireless communication systems and the electronic components needed to build such systems. Furthermore, as SOLiD submitted, the Skilled Person is operating in the real world and making a real DAS to address an actual RF coverage shadow area which might well be in a set of tall buildings or a building complex.

Technical background

21.

The parties agreed a very useful Technical Primer. Certain aspects of it are no longer relevant due to EP626 dropping out of the action (save for the amendment application). Dr Acampora exhibited a revised version of the Technical Primer to his first report, in which he added more detailed explanation in certain areas. What follows in this section is based on the revised Technical Primer, plus a useful précis provided by SOLiD, but with some edits of my own to remove aspects which are not relevant to EP850.

Cellular wireless communication

22.

Mobile devices operate in a cellular communication system. In such a system, mobile devices communicate wirelessly with base stations, which in turn are connected to the world-wide telecommunications infrastructure generally by way of wired connections or point to point radio links.

23.

In the forward, or downlink direction, signals generated by a sending device connected to the telecommunications infrastructure arrive at the base station in the form of a sequence of binary digits (bits), are digitally encoded onto radio frequency ("RF") carrier waves and sent from the base station to a receiving mobile device. In the reverse, or uplink direction, digital signals generated by a mobile device are encoded onto radio frequency carrier waves which are conveyed from a transmitting mobile device to a base station and sent onto the infrastructure to the intended recipient (which may or may not be another mobile device in wireless communications with the same or a remote base station).

24.

A base station is typically owned and operated by a wireless service provider using certain wireless RF spectrum purchased or licensed from a government regulatory agency.

25.

Each base station operates over at least one radio cell, which is the area of radio coverage of that base station (for present purposes, it can be assumed that a base station operates over one cell). A base station communicates with mobile devices in its cell through its antenna by transmitting and receiving RF signals.

26.

RF signals weaken with distance as they travel away from their transmitting antenna. A base station’s cell is the area in the immediate vicinity of the base station where the RF signals transmitted by it in the forward direction are still sufficiently strong to be detectable and processable by receiving mobile devices, and vice-versa for the reverse direction.

27.

Within its cell, the base station is tuned to receive all of the RF frequencies in the service provider’s uplink RF spectrum, including whatever noise or mobile uplink signals may be carried on those frequencies. The base station’s antenna has no way of knowing whether the arriving RF signals are noise or data signals, or where the RF signal originates from. The base station antenna simply operates to collect the entire received wireless RF uplink spectrum and any signals which may be carried on any frequencies within that spectrum, and feeds the composite received RF spectrum into a segment of cabling (as an RF electrical signal). At this point, the base station has no way of knowing whether any mobile signals are present.

28.

Between the base station and mobile device in a cell, the service provider generally allows no access to the information signals encoded or carried on the RF spectrum. Thus, whether the RF spectrum carries video or voice information on any particular RF frequency, or no information, is known only to the base station that packages the data and the mobile devices that receive and unpack any data from the RF spectrum.

29.

It is typically (but not always) the case that the downlink signals in a cell occupy a different portion of the electromagnetic radio spectrum than the uplink signals, and that the downlink and uplink spectra are assigned as a matched set.

30.

Where obstructions are present within a cell, such as tall buildings or terrain (hills, valleys, tunnels), there is further weakening of an RF signal between a base station antenna and mobile devices in its cell, creating what are often referred to as radio wave shadow areas. A DAS can be employed to reach mobile devices located in these obscured areas of a cell, such as inside buildings and tunnels, effectively extending the reach of the base station to those obscured areas such that mobile devices can transmit and receive RF signals to and from the base station, by way of the DAS. A DAS can also extend the geographical area covered by a base station.

RF signals

31.

The wireless signal travelling away from the antenna of any cellular system, including a DAS, consists of a modulated radio carrier. A modulated radio carrier is a radio wave that carries information. Information is carried on these radio waves by manipulating their amplitude, frequency and/or phase.

32.

The amplitude is the maximum height of the unmodulated wave (Figure 1a) and the wavelength is the distance between two successive peaks (Figure 1b). Figure 1a shows the wave vs. time at a fixed distance from the antenna: the time between successive peaks is its period, and the frequency of the wave (the rate at which it oscillates from peak to trough to peak) at any one location is the inverse of its period. Frequency is measured in cycles per second, the S.I. unit being Hz. Figure 1b shows the wave versus distance as it propagates away from the antenna at one particular point in time. These waves are sine waves and can be expressed using the sine function (see further below).

33.

Figure 1c shows the phase of a radio wave. In this figure, two time-varying sine waves of the same frequency are displayed but offset from each other by a fraction of a cycle. The offset between the two sine waves is referred to as their phase difference.

34.

A single unmodulated RF signal can therefore be completely characterised by reference to its amplitude, frequency, and phase - no additional information can be inferred from it alone, and such a signal is referred to as being a carrier wave.

35.

Engineers refer to such waves as sine waves. A radio wave may be mathematically represented as a function of time by r(t) = A sin(2πft + θ) [Equation 1], where A is the amplitude, f is the frequency, t is time, θ is the phase, and π is a universal constant.
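By way of illustration only, the following short Python sketch evaluates Equation 1 for some assumed values of A, f and θ (the figures are not taken from the evidence); it also shows that the wave repeats after one period of 1/f seconds:

import math

# Illustrative only: evaluate Equation 1, r(t) = A*sin(2*pi*f*t + theta),
# for assumed values of the amplitude, frequency and phase.
A = 1.0               # amplitude (arbitrary)
f = 1000.0            # frequency in Hz (arbitrary)
theta = math.pi / 4   # phase in radians (45 degrees, arbitrary)

def r(t):
    return A * math.sin(2 * math.pi * f * t + theta)

print(r(0.0))        # value of the wave at t = 0
print(r(1.0 / f))    # one period (1/f seconds) later the wave has the same value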

Modulation of radio carriers

36.

Radio signals are analogue, in that they are capable of possessing infinite values of frequency, amplitude and phase. However, cellular systems operate digitally, by conveying a series of bits which each have a value of 0 or 1. Radio signals can be manipulated to carry digital information by way of modulation. A radio modulator is the component of a radio transmitter that imparts the information to be sent onto a carrier wave (often referred to as being an unmodulated carrier) and in so doing, imparts a range of radio frequencies to the modulated radio signal.

37.

The period during which one symbol is sent is known as a symbol period. The symbol period is important because, unlike an unmodulated radio wave which has a single frequency, a modulated radio wave occupies a band of frequencies known as the bandwidth of the signal (that is, the amount of radio spectrum that the signal occupies). Generally speaking, the bandwidth of a modulated radio wave varies inversely with the symbol period. Bandwidth is measured in units of cycles per second, also known as hertz (Hz).

38.

Figure 2a below shows an unmodulated carrier which has only a single frequency, in contrast to Figure 2b where the modulated carrier occupies a band of frequencies:

39.

These modulated radio signals, therefore, are information bearing signals. Generally speaking, the faster the information-bearing signal varies (that is, the smaller the symbol period such as would occur, for example if bits were to be generated by a computer at a faster rate), the greater will be the required frequency range (i.e., the bandwidth) of radio waves needed to deliver the information.

40.

Mathematically, a modulated radio signal can be represented as S(t) = A(t) sin [2πft + θ(t)] [Equation 2]. Note that the amplitude and phase of the sine wave are varying with time in accordance with the underlying information to be transmitted. These temporal variations are what give the modulated radio wave its bandwidth, and the faster the time variations, the greater is the rate at which information is being transmitted, and the greater the modulated RF signal’s bandwidth.

41.

The spectrum of an exemplary information-bearing radio signal appears in Figure 2C. Shown is the signal’s centre frequency, represented by the symbol f0, and the bandwidth of the signal’s spectrum, represented by the symbol B. The centre frequency is often called the carrier frequency.

42.

A radio modulator can convert an information-bearing signal, such as a sequence of bits, into an information-bearing radio signal by simply varying the amplitude of a single-frequency radio wave on a bit-by-bit basis, with a logical “1” corresponding to a first amplitude, and a logical “0” corresponding to a second amplitude. This is amplitude modulation and is shown in Figure 3a below.

43.

It is also possible to modulate the phase of a radio wave. Figure 3b shows an exemplary phase modulator. In this figure a logical “1” produces no phase shift, while a logical “0” produces a 180° phase shift. Amplitude and phase modulation can also be combined to achieve faster data rates as shown in Figure 3c.

44.

Defining multiple amplitude and multiple phase levels enables higher data rates to be achieved using the same bandwidth since each symbol carries more information. For example, in Figures 3a and 3b, only a single bit is sent per symbol, whereas in Figure 3c, two bits are sent per symbol. Accordingly, the spectrum utilization efficiency in the modulation scheme in Figure 3c is twice that of Figures 3a and 3b.

45.

Data that has been modulated onto a radio wave carrier can be extracted (demodulated) by a receiver, provided the receiver knows what scheme was used to modulate the signals in the first place.

Frequency shifting

46.

The RF spectrum occupied by a modulated RF carrier can be shifted from its original radio frequency to a new location along the frequency spectrum by simply mixing (multiplying) the modulated carrier signal with any single frequency (unmodulated) carrier signal. This process of repositioning a modulated signal, discussed in more detail in the following section, is sometimes referred to as “frequency shifting”, “frequency translation”, “down conversion” or “up conversion” (depending on whether the shift moves the band down or up along the frequency spectrum) and does not change the shape of the modulated spectrum. For example, a modulated radio signal which varies from 998 to 1002 MHz (that is, having a bandwidth of 4 MHz) centred at 1000 MHz could be mixed with a 995 MHz signal, in order to shift the frequency down and result in a modulated radio signal which now varies between 3 and 7 MHz (and which still occupies a bandwidth of 4 MHz but is now centred at 5 MHz). This is referred to as an intermediate frequency ("IF") signal.
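The worked example in this paragraph can be checked with the following short Python sketch (illustrative only; a real mixer produces both sum and difference frequency terms, and the sum terms around 1993 to 1997 MHz would be removed by filtering):

# Illustrative sketch of the worked example above: mixing shifts each frequency
# component of a 998-1002 MHz signal down by the 995 MHz local-oscillator frequency.
lo = 995.0                          # local oscillator frequency, in MHz
band = [998.0, 1000.0, 1002.0]      # edges and centre of the modulated signal, in MHz

difference_terms = [f - lo for f in band]   # kept after filtering
sum_terms = [f + lo for f in band]          # removed by the filter

print(difference_terms)  # [3.0, 5.0, 7.0] -> same 4 MHz bandwidth, now centred at 5 MHz
print(sum_terms)         # [1993.0, 1995.0, 1997.0]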

47.

Figure 3c is provided as a simple example. In reality modern cellular systems use far more complex modulation schemes which can send, for example, 6 or 8 bits per symbol.

48.

Frequency shifting does not change how the original signal was modulated. The frequency shifted signal still contains all the same amplitude and phase modulation characteristics of the original signal and so conveys exactly the same information. For completeness it is also possible to downshift not just to an intermediate frequency, but to baseband, meaning that the centre of the modulated spectrum is at 0 hertz.

Modulation and frequency shifting in more detail

49.

In Equation 2 above, the modulated radio signal can be decomposed into two components using the following trigonometric identity:

sin (x + y) = sin (x) cos (y) + cos (x) sin (y) [Equation 3]

50.

Applying Equation (3) to Equation (2),

S(t) = A(t) sin [2πft + θ(t)] = A(t)cosθ(t)sin(2πft) + A(t)sinθ(t)cos(2πft)

S(t) = A(t)cosθ(t)sin(2πft) + A(t)sinθ(t)sin(2πft +90°) [Equation 4]

51.

Thus, the modulated RF signal can be considered to be two amplitude modulated RF carriers, offset by 90°. These amplitude components are typically called the baseband in-phase and quadrature components:

I(t) = A(t)cosθ(t) [Equation 5]

Q(t) = A(t)sinθ(t) [Equation 6]
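The decomposition in Equations 4 to 6 can be verified numerically; the following Python sketch is illustrative only and uses arbitrary values for the amplitude, phase, carrier frequency and time instant:

import math

# Numerical check (illustrative only) of Equations 4-6: a modulated carrier
# A*sin(2*pi*f*t + theta) equals I*sin(2*pi*f*t) + Q*cos(2*pi*f*t),
# where I = A*cos(theta) and Q = A*sin(theta) are the in-phase and quadrature components.
A, theta = 2.0, 0.6       # instantaneous amplitude and phase (arbitrary values)
f, t = 1.0e6, 1.234e-6    # carrier frequency in Hz and an arbitrary time instant

I = A * math.cos(theta)
Q = A * math.sin(theta)

lhs = A * math.sin(2 * math.pi * f * t + theta)
rhs = I * math.sin(2 * math.pi * f * t) + Q * math.cos(2 * math.pi * f * t)

print(abs(lhs - rhs) < 1e-12)  # True: the two representations agree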

52.

Not only does the modulated RF signal occupy spectrum of bandwidth B, but so do the baseband in-phase and quadrature components.

53.

Generally speaking, any time-varying information signal, such as an in-phase or a quadrature signal, can be decomposed into the superposition of different frequency sine waves through a mathematical process known as the Fourier Transform. The details do not matter: suffice it to say that the Fourier transform of a time-varying signal consists of plots showing the amplitude and phase of the underlying sine waves. These plots reveal the spectrum of a signal, consisting of an amplitude portion and a phase portion as shown in Figure 4A below, which are representative of how the baseband spectrum of the in-phase and quadrature components might appear. Note that the centre frequency of the baseband spectrum is at a frequency of zero (0). The original time-varying signal can be obtained from its spectrum by a process known as the Inverse Fourier Transform.

54.

Creation of a modulated RF signal from its baseband components follows directly from Equations 5 and 6; the in-phase and quadrature components are multiplied by sin(2πf0t) and sin(2πf0t + 90°), respectively, and the two modulated RF signals produced are simply summed together. By so doing, the baseband components have been up-converted to RF at a carrier frequency of f0.

55.

The unmodulated RF carriers sin(2πf0t) and sin(2πf0t + 90°) are generated by a Local Oscillator (LO) that generates an unmodulated tone at frequency f0. The two RF signals are generated from the same LO by shifting the phase of the single tone produced by the LO by 90°. The devices that multiply the baseband signals by the single-tone RF frequency are known as mixers.

56.

The spectrum of the modulated RF signal might appear as shown in Figure 4B below, which appears as two frequency-shifted versions of the underlying baseband spectra, shifted from 0 to +f0 and −f0.

57.

It is possible to convert a baseband signal to a modulated RF signal via a two-step process. First, the signal is up-converted from baseband to an Intermediate Frequency (IF). The process for doing this is the same as described above for converting a baseband signal to a modulated RF signal, except that the frequency of the LO is lower. The spectrum of the resulting modulated IF signal would be the same as that shown above in Figure 4B, except that the modulated spectrum would be centred at +fI and −fI, where fI is the Intermediate Frequency (IF). Next, using a second mixer and a second LO at a frequency of f = f0 − fI, a modulated RF signal at a frequency of f0 is produced; this is shown in Figure 4C below:

58.

It is often desirable to perform the RF modulation by means of this two-step process because, from a practical perspective, it is easier and less costly to perform various operations at some IF than at an RF frequency. Examples of these operations might include digitization of baseband signals (see further below), and up conversion to an IF via well-known digital signal processing techniques (see also further below). Digital filtering is another possible advantage.

59.

Recovering a baseband information-bearing signal from a modulated RF signal involves mixing (i.e. multiplying) the modulated RF signal by a single-frequency signal generated by a Local Oscillator. Although this can be done in a single step, such down-conversion is often performed by a two-step process. First, the modulated RF signal is down-converted to an IF, and then it is further down-converted to baseband I and Q components. Again, for practical reasons not dissimilar from those already mentioned (digital processing and filtering at an intermediate frequency), a two-step down-conversion process is often used.

60.

The two step down-conversion process begins with Figure 4D below:

61.

As shown, an incoming RF modulated signal is mixed with an LO at a frequency such that the desired intermediate frequency signal is produced (namely, the LO frequency should be chosen to be f = f0 − fI such that an IF of fI results). The filter shown in Figure 4D removes the so-called double-frequency term at approximately 2 × f0 which is mathematically produced by the multiplication process.

62.

The second conversion from IF to baseband is shown in Figure 4E. Here two mixers are used to recover the I and Q baseband components (note that Figure 4E has been drawn so as to illustrate that the down conversion to baseband could be done either directly from RF or indirectly from IF). Mathematically, Figure 4E is the reverse of Equations 4-6 above. Using techniques such as discussed in this paragraph, it is also possible to down-convert an RF signal to a lower IF frequency (rather than to baseband), by a two-step down-conversion process.
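The frequency bookkeeping for the two-step down-conversion described above can be illustrated with the following short Python sketch (the frequencies chosen are assumptions for illustration only):

# Illustrative sketch only: two-step down-conversion from RF to IF to baseband.
f0 = 2000.0          # carrier frequency of the incoming modulated RF signal, in MHz (assumed)
fI = 150.0           # desired intermediate frequency, in MHz (assumed)

lo1 = f0 - fI        # first local oscillator at 1850 MHz
if_centre = f0 - lo1 # difference term kept by the filter: 150 MHz (the IF)

lo2 = fI             # second stage mixes with an LO at the IF itself...
baseband_centre = if_centre - lo2  # ...leaving the I and Q components centred at 0 Hz

print(lo1, if_centre, baseband_centre)  # 1850.0 150.0 0.0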

Sampling theory

63.

Any analogue waveform that is limited in bandwidth can be completely represented by analogue samples of that waveform if the rate of sampling is sufficiently high. That is, from the sampled values alone (each of which is simply a number), the original waveform can be completely reconstructed. Accordingly, any information that is carried by such a bandlimited waveform can be completely recovered from the set of samples.

64.

The minimum rate at which waveforms can be sampled, such that the samples are adequate to recover the entire signal, is twice the highest frequency of the signal being sampled (referred to as the Nyquist–Shannon sampling theorem). If the sampling rate is less than this, then upon reconstruction, an effect known as aliasing will occur, which will result in distortion when recreating the original analogue waveform from the samples.
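The effect of sampling below the Nyquist rate can be illustrated with the following short Python sketch (the frequencies are assumptions chosen for illustration only): a 7 Hz cosine sampled at 10 Hz yields exactly the same sample values as a 3 Hz cosine, so the two cannot be distinguished from the samples alone.

import math

# Illustrative sketch of aliasing: a 7 Hz cosine sampled at only 10 Hz (below the
# Nyquist rate of 14 Hz) produces exactly the same sample values as a 3 Hz cosine,
# so the original waveform cannot be recovered from the samples.
sample_rate = 10.0  # Hz, deliberately too low for the 7 Hz signal
samples_7hz = [math.cos(2 * math.pi * 7.0 * n / sample_rate) for n in range(10)]
samples_3hz = [math.cos(2 * math.pi * 3.0 * n / sample_rate) for n in range(10)]

print(all(abs(a - b) < 1e-9 for a, b in zip(samples_7hz, samples_3hz)))  # True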

65.

Prior to sampling, high frequency signals are first typically frequency shifted to some lower intermediate frequency (or to baseband), as explained above. This makes the signal easier to process and sample, resulting in a more accurate digital representation.

66.

Furthermore, if the sampling rate R is greater than 2 × B, then the original analogue waveform can be readily recreated by passing the samples through a filter having a passband that: (a) leaves all spectral components within the range (0, B) unchanged; and (b) falls to zero for all frequencies greater than R − B. Such a filter is shown in Figure 4F.

Digital representation of samples

67.

The sampling theorem is the basis for transmission of analogue information (such as human speech and video signals) over digital transmission facilities, and for the digital processing of such signals. Upon sampling, each analogue value is converted into a digital representation by a device known as an analogue-to-digital (A/D) converter.

68.

Digital information is conveyed using bits (logical “zeros” and “ones”). A digital word is a way to represent a sequence of bits. For example, a 4-bit word can have any of the integer values between 0 (0000) and 15 (1111). The left-hand bit in a word is referred to as the most significant bit (MSB), and the right-hand bit the least significant bit (LSB).

69.

An A/D converter produces such a digital word for each sample it generates, and outputs it in a parallel format (meaning that all of the bits of the digital word are produced simultaneously, and each bit of the word appears on its own output). For example, if it is a 4-bit A/D converter, each bit in the 4-bit word generated by the A/D converter will appear on its own output. This is often referred to as a word-parallel format. In many situations, a serial bitstream is required. A parallel-to-serial converter is used to convert a sequence of word-parallel digital words into a serial bitstream (in this format, the digital word is often said to be in bit-serial format).
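The following Python sketch is a minimal illustration of parallel-to-serial conversion of 4-bit words (the sample words are assumptions, not taken from any evidence in the case):

# Illustrative sketch only: converting word-parallel 4-bit ADC outputs into a
# bit-serial stream, most significant bit first.
words = [0b0110, 0b1111, 0b0001]           # three 4-bit samples in parallel form

serial_bits = []
for word in words:
    for position in (3, 2, 1, 0):          # MSB first, then down to the LSB
        serial_bits.append((word >> position) & 1)

print(serial_bits)  # the bits of the three words in turn: 0110, 1111, 0001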

70.

The process of digitising an analogue signal can be illustrated generally as shown in Figure 5 below:

71.

Figure 5 shows a very simple example of digitising an analogue signal into a string of 3-bit words. In reality, in the context of modern cellular systems a far greater number of bits would be required.

72.

The input signal is sampled according to the sampling rate, shown in the figure above as the vertical red lines indicating the regular timepoints at which samples are taken. Each measurement is assigned a discrete value; in the figure above the value can be any integer from 0 to 7 inclusive. If the measurement falls between two integer values, it is rounded (for example, in the figure above, the second measurement is 5.5 and is rounded to 5). The end result of digitisation is that a continuously varying analogue waveform, as shown above, has been converted into a series of discrete values that can be represented by a string of 3-bit digital words: 011 101 110 101 011 001 000 001. A digital-to-analogue converter (DAC) can subsequently convert these digital words back to analogue values, which can then be further processed, specifically by filtering as discussed in paragraph 66 above, to recover an analogue waveform.
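The quantisation step described in this paragraph can be illustrated with the following short Python sketch; the analogue sample values are assumptions chosen so that the output matches the word string quoted above:

# Illustrative sketch only: quantising analogue sample values into 3-bit digital
# words (integer levels 0-7). The sample values below are assumptions, not taken
# from Figure 5, chosen so the output matches the word string quoted above.
analogue_samples = [3.2, 5.4, 6.1, 4.8, 2.9, 1.4, 0.3, 1.0]

def quantise_3bit(value):
    level = min(7, max(0, round(value)))   # nearest of the 8 discrete levels
    return format(level, '03b')            # the level expressed as a 3-bit word

print(' '.join(quantise_3bit(v) for v in analogue_samples))
# 011 101 110 101 011 001 000 001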

73.

As another example, an eight-bit ADC may operate as follows. If the analogue value sampled falls anywhere between zero and one, the ADC produces the digital sample 00000000. If the analogue sample is anywhere between one and two, the ADC produces the digital sample 00000001. If the analogue sample is anywhere between two and three, the ADC produces the digital sample 00000010. Continuing in this manner, ultimately if the analogue sample is anywhere between 254 and 255, the ADC produces the digital sample 11111110. Finally, if the analogue sample is greater than 255, the ADC produces the digital sample 11111111.

74.

Continuing this example a little further, if a continuously time-varying analogue waveform with bandwidth B is sampled at a rate R > (2 × B), and each of the sequence of analogue samples is converted into an eight-bit digital word, then only the sequence of eight-bit words is needed to reconstruct the original analogue waveform. To do so, each eight-bit digital word is converted by a DAC back to an analogue value, and the sequence of analogue values so regenerated is further processed by a filter having the characteristics described in paragraph 66 above.

75.

In modern communication systems, there are numerous advantages associated with the digital representation of analogue waveforms. However, the process of digitisation always introduces some distortion to the recovered analogue signal, that is, A/D and D/A conversion introduces quantisation noise. This results from the fact that the original analogue samples may each have assumed any of a continuum of values, but upon digitisation, the analogue values can only be mapped to the closest discrete number value. For any given application, the quantisation noise may be reduced by representing each analogue sample by a greater number of bits, also referred to as a higher bit resolution. In one of the above examples, there are eight possible values that each sample can be assigned (0 to 7) which can be represented in binary using three bits (2³ = 8), and so would be referred to as a 3-bit resolution. Quantisation noise could be reduced by instead using a 4-bit resolution (permitting 2⁴ = 16 values) or even 8-bit resolution (2⁸ = 256 values) for the same range of analogue signal. However, this comes at the cost of requiring components which can operate at faster rates since more bits would then need to be processed per unit of time.

76.

The bit rate of the digital representation of an analogue signal is the number of bits transmitted in a second, which is the product of the sampling rate and the number of bits of resolution per sample.
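By way of a simple illustration (the figures are assumptions, not taken from the evidence), the following Python sketch computes the bit rate for an assumed sampling rate and bit resolution:

# Illustrative only: bit rate = sampling rate x bits of resolution per sample.
# The figures below are assumptions, not taken from the judgment.
sampling_rate = 12_000_000   # samples per second (assumed)
bits_per_sample = 8          # bit resolution (assumed)

bit_rate = sampling_rate * bits_per_sample
print(bit_rate)              # 96000000 bits per second, i.e. 96 Mbit/s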

Processing of digital samples

77.

The benefits of sampling an analogue waveform and converting each sample into a digital representation are three-fold. First, special purpose processors known as Digital Signal Processors (DSPs) can operate on the digital samples of a band-limited waveform and produce a new set of digital samples representative of a second band-limited analogue waveform identical to those that would have been produced had the second waveform been generated from the first through analogue processing (such as filtering, up-conversion, and down-conversion). Generally speaking, it is far simpler, less costly, and less power-consuming to digitally process digital representations of analogue signals as opposed to using analogue signal processing.

78.

Second, once a band-limited analogue waveform has been sampled and converted into a digital representation, the samples can be reliably transported to some remote location over digital transmission and switching facilities; analogue transmission and switching costs more and produces greater distortion as compared with digital transmission and switching. At that remote location, the analogue signal can then be recovered from its digital samples.

79.

Finally, the digital representation of an analogue signal can be time-multiplexed with the digital representation of other analogue signals, which is advantageous when transporting multiple signals between two locations.

80.

Note that digitization of an information-bearing signal intended to be communicated over a radio system can be performed at baseband or at some intermediate frequency. In this latter regard, the sampling rate is effectively at least twice the highest frequency contained in the intermediate frequency signal, that is, the effective sampling rate must be at least twice the sum of (1) the IF and (2) the baseband bandwidth. As before, each sample is represented as a multi-bit digital word. From these digital samples, a complete analogue IF information-bearing signal can be created by converting each digital sample back into an analogue number via a DAC and filtering, as before, to obtain a continuous IF signal. Although digitization of an RF signal is (in principle) possible, it would be unusual to do so in a practical system due to the high sampling rates that would be involved, and the high bit rates of the digital representations produced by the A/D process.

81.

Since the digital samples of a modulated IF waveform contain all of the information from which the underlying analogue IF waveform can be recreated, such digital IF can be processed with a DSP. Such processing might include digitally combining two or more IF signals of different frequencies, digital separation of two or more IF signals of different frequencies, and digital filtering of IF signals.

82.

A simple type of digital signal processing that might be performed on two or more signals is digital addition or digital summing; this can be done on the digital representation of baseband signals or on the digital representation of IF signals. An example of digital summing is illustrated in the following drawing, and the accompanying arithmetic is discussed in the following paragraph.

83.

Assume the sum of two analogue signals is desired. This might be approached in at least two ways. As a first example, as shown on the left side of the drawing, the analogue signals can be combined in the analogue domain. Alternatively, addition of the two analogue signals can be done by sampling and digitizing each of the analogue signals and mathematically adding the digital representations of the samples, as shown in the middle of the drawing. After the digitized samples have been digitally summed, the summed digital samples can be converted back into an analogue waveform by converting the digital sums back into analogue samples and then filtering the samples. This is shown on the right side of the drawing. Note that the points labelled in the figures are for illustrative purposes, to help compare the result of digital summation with that of analogue summation. In reality, the combining in the analogue domain would typically occur continuously along the entire signal, not just at the sampled instants chosen in the alternative digitisation process, and the operation would be subject to various types of distortion.

84.

Numerically, the analogue values at the three consecutive sample times shown in the drawing above of a first signal are 139.6, 218.8, and 50.2. Assuming that the discrete output values of the analogue-to-digital converter used to digitize the analogue signals are limited to integer values, these three samples of the analogue signal can be represented by the digital values 10001011, 11011010, and 00110010, respectively, which correspond to 139, 218, and 50. Suppose that the corresponding three values of a second analogue signal are 125.7, 62.5, and 3.3; these can be represented by the digital values 01111101, 00111110, and 00000011, respectively, which correspond to 125, 62, and 3. If the two digital signals are to be added, then the first sample of the first signal would be mathematically added with the first sample of the second signal (139 + 125 = 264), then the second sample of the first signal would be mathematically added with the second sample of the second signal (218 + 62 = 280), and finally the third sample of the first signal would be mathematically added with the third sample of the second signal (50 + 3 = 53), producing the nine-bit digital numbers 100001000, 100011000, and 000110101, which are the binary digital values for 264, 280, and 53. After digital-to-analogue conversion, including filtering of the digitally summed samples, an analogue signal representing the sum of the original two analogue signals has been provided, but it is not perfect due to quantisation noise as explained above in paragraph 75.
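The arithmetic in this paragraph can be reproduced with the following short Python sketch (illustrative only):

# Illustrative check of the arithmetic above: quantise the analogue samples to
# integers (as the ADC described in paragraph 73 would), add them pairwise, and
# express the sums as nine-bit binary words.
first_signal  = [139.6, 218.8, 50.2]
second_signal = [125.7, 62.5, 3.3]

first_digital  = [int(v) for v in first_signal]    # [139, 218, 50]
second_digital = [int(v) for v in second_signal]   # [125, 62, 3]

sums = [a + b for a, b in zip(first_digital, second_digital)]
print(sums)                                        # [264, 280, 53]
print([format(s, '09b') for s in sums])            # ['100001000', '100011000', '000110101']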

85.

Note that by means of a digital signal processor, two or more digital IF signals of different IF frequencies and with non-overlapping spectra can be combined into a single digital signal, provided that the effective sampling rate of each signal is at least twice the highest IF frequency, plus the bandwidth, of that signal. As before, the digital representation of such a composite signal contains all the information needed to recreate an analogue representation of that composite. Furthermore, digital representations of such composite signals can be digitally summed.

86.

A composite analogue signal consisting of the sum of two or more IF signals of different intermediate frequencies with non-overlapping spectra can readily be separated into the underlying individual components by means of analogue filters (each passing only the signal at a particular intermediate frequency).

87.

Accordingly, it follows that the digital representation of a composite IF signal can be separated into digital representations of each underlying component via digital filtering, a type of digital signal processing. Furthermore, digital representations of composite IF signals can be summed together, and the sum of these components can be separated into the individual sums of the underlying components.

Distributed antenna systems

88.

A DAS is a network containing a host or master unit and remote antenna units (“RAUs” or “remote units” or "slave units" (in old-fashioned terminology)). The host unit communicates (typically via cables) with a base station. Typically, a remote unit is located within an area obscured as described in paragraph 30 above, or is used otherwise to extend the range of coverage of a base station. Such systems are often deployed and operated by a service provider seeking to expand its wireless service, which operates on its licensed wireless RF spectrum, to obscured or radio wave shadow areas – e.g. inside a building, or over some wider area than that normally covered by a base station’s cell. The DAS operates to extend the functionality of the base station to the remote units. The base station and host unit are typically connected by way of wired links, and the host unit is connected to its remote units by way of wired links. An exemplary DAS configuration is shown in figure 6 below:

89.

The roles of the host unit are (i) in the forward or downlink direction, to receive modulated RF signals from the base station (or digital signals containing the modulated RF amplitude and phase information) and deliver these to the remote units, in a form such that the information-bearing amplitude and phase variations are preserved, via some communications medium such as optical fibre or coaxial cable; and (ii) in the reverse or uplink direction, in a similar manner, to receive modulated RF signals (or digital signals containing the modulated RF amplitude and phase information) sent by the remote units and convey these to the base station.

90.

In the downlink direction, each remote unit receives representations of RF signals sent from the host unit and then converts these into an RF format appropriate for transmission of RF signals from its antenna, such that the transmitted RF signals can be received by mobile devices in range of the transmission.

91.

In the uplink direction, each remote unit receives RF signals from mobile devices that are transmitting in the reception area of that remote unit and communicates those RF signals or a representation thereof back to host unit.

92.

As a typical DAS has multiple remote units and given that each remote unit may transmit and receive radio signals on the same radio frequency, in the upstream direction, it may be necessary for the host unit to aggregate radio signals on the same frequency band received from multiple remote units into a single modulated RF signal for delivery to the base station.

93.

In some implementations of DAS (analogue DAS), the modulated RF analogue signal sent from the base station to the host unit is transmitted through to the remote units all in the analogue domain (and the same in the upstream direction). However, analogue signals suffer from weakening and distortion of the signal over distance to a much greater extent than digital signals. Therefore, in other implementations of DAS (digital DAS), the modulated RF analogue signal received by the host unit from the base station is digitised before it is transmitted to the remote units, where the digitised samples are then converted to a modulated RF analogue signal at the remote units. This involves the steps described above and summarised schematically below:

94.

Similar procedures (not shown) are performed in the uplink direction (along with the aforementioned aggregation at the host unit of signals arriving from the different remote units), with the result that a modulated RF signal is delivered by the host unit to the base station.

95.

Although radio signals in a cellular communication system are already modulated by the cellular communication system to carry information-bearing digital signals, a digital DAS does not demodulate those radio signals to obtain the underlying information-bearing digital signals. Rather, a digital DAS samples and digitises the modulated RF analogue signal, or more commonly the down-converted modulated IF or baseband analogue signal, to form a new digital signal that is representative of the modulated RF analogue signal received by the host unit from the base station. This new digital signal is then propagated across the digital DAS to its remote units, where it is converted from digital to analogue format and up-converted to a radio frequency carrier to recreate the modulated RF analogue signal for transmission by each remote antenna to the mobile devices.

96.

Similarly, in the reverse direction, modulated RF analogue signals arriving from the mobile devices at each remote unit (or more commonly their IF or baseband representations) are sampled and digitised (again, without demodulating to obtain the underlying information-bearing digital signals) for transmission to the host unit, where the digitised samples arriving from the several remote stations are aggregated together and converted from digital to analogue format for delivery as a modulated RF analogue signal to the base station.

Common General Knowledge

97.

Initially it appeared that everything in the Technical Primer was agreed to be CGK. However, in his third report, Dr Acampora indicated that what I have set out in the final part in paragraphs 93 to 96 above was not CGK and represented the post-priority position. In cross-examination he accepted the first two sentences of paragraph 93 were CGK, but he drew the line after that. In effect he contended that digital DAS was not CGK.

98.

Professor Seeds’ evidence was that those in the field were working on digital DAS and that digital DAS had been demonstrated and deployed. As Professor Seeds said, its deployment was restricted to installations where the maximum transmission distance was long (>10km) because more complex hardware was required which cost more and consumed more power, but the concept was clearly established.

99.

Professor Seeds also said that the Skilled Person would have been aware of progress and advances in technology which would continue to make digital DAS more attractive. First, that the performance, cost and power consumption of ADCs were improving with advances in silicon integrated circuit technology. Second, that DSP was becoming more attractive for deployment, again due to those advances. Third, that for optical data communications links, costs were falling and bit rates increasing. None of what I have summarised in the last two paragraphs was really disputed by Dr Acampora and I accept Professor Seeds’ view that the concept of digital DAS was CGK and that those in the field were working on digital DAS but awaiting the availability of componentry at reasonable cost before the advantages of digital DAS could be more fully realised.

CGK points in dispute

100.

There are a number of disputes I have to resolve as to what was CGK. The parties identified the following three:

i)

The Skilled Person’s knowledge of network architectures in DAS systems, and of tree and multiple-star cabling architectures in telecommunications generally;

ii)

Whether a particular DAS called the LGCell was CGK;

iii)

Whether various techniques for handling overflow were CGK.

101.

All three of these points were effectively about expansion units. On that, CommScope were keen to stress that we are concerned with CGK in the UK: Generics v Warner-Lambert [2015] EWHC 2548, [123]-[124], Arnold J., a point I entirely accept.

102.

A fourth dispute emerged in the course of closing submissions, even though not identified as such. I address this below. In addition, there was a minor dispute over terminology. Dr Acampora said that a DAS inherently has a point to multi-point architecture, whereas a remote antenna is point to point. As SOLiD submitted, nothing turns on this.

Network architectures

103.

It was surprising that there was any debate about this, but there was. In general terms, there are two basic configurations for a cabled network. Like Professor Seeds, I take a VHF area configuration by way of example. The first is a simple single star arrangement (1) where a base station has multiple connections to multiple antenna units (AUs). The second is a cascade or daisy-chain arrangement (2) where the base station is connected to one AU which is then connected to a second AU and so on. At least some if not most configurations in practice comprise combinations or extensions of those two basic configurations i.e. some form of cascaded star or tree architectures (3). The precise configuration is dictated by the specific application. However, all configurations are designed to minimise the total transmission path length, to reduce signal losses in the transmission medium and to reduce implementation costs.

104.

In his second report, Professor Seeds produced a number of specific illustrations of network architectures. In response, whilst he disputed that the ANSI TIA-568 Ethernet cabling standard (the latest version of which, ANSI-TIA-568-A, was introduced in 1995) was CGK, Dr Acampora accepted that a ‘double star’ arrangement, as illustrated in this figure from the standard, was CGK:

105.

Another illustration, of a ‘traditional passive DAS’, provided by Professor Seeds was this:

106.

In this arrangement, in the downlink the base station signal enters coaxial splitter SPT0 which divides the signal in star topology between building floors. On each floor there is a further coaxial splitter (e.g. SPT1) which splits the signal into a further star arrangement, feeding the antennas. In the uplink, the signals from the antennas are first combined in SPT1-SPT3, the combined signals from each floor then pass to SPT0 where they are combined and fed to the base station. In large coaxial DAS, bidirectional amplifiers were added to compensate for cable and splitting/combining losses.

107.

Although this illustration was taken from a 2015 publication, Professor Seeds was clear that this type of coaxial cable DAS had been deployed since the 1950s (for example in mines). This diagram was just an example of how a traditional passive DAS could be constructed. Professor Seeds gave evidence that coaxial DAS of this type were in the product line at the priority date of major telecoms manufacturers including Aerocomm, Andrew, Huber & Suhner, Kaval and RFS.

108.

Dr Acampora noted that all the illustrations produced by Professor Seeds were ‘double star’ and said it was not appropriate to generalize from this double-star arrangement, which he said was a conventional two-level branching tree topology, to a general tree topology. In making this point, Dr Acampora might have been drawing a distinction between DAS and other networks. Either way, in my view, this was Dr Acampora in ‘prove it’ mode, a stance which was inconsistent with the role of an impartial, objective and independent expert witness.

109.

As Professor Seeds said, the skilled person would have learnt about network architectures as an undergraduate, including the basic principles I have labelled (1)-(3) above. I have no doubt that tree or multiple star network architectures were CGK, for a number of reasons including the following.

110.

First, it is apparent that the illustrations which Professor Seeds managed to find are just that: illustrations of the principle, showing how splitters and combiners were used to distribute and gather the signals. In real-life applications, the Skilled Person would have been accustomed to designing his DAS to suit the application. By way of a very simple example, a large office installation may well have required two or more ‘wings’ on each floor to give appropriate coverage, so that SPT1 would feed SPT1a, SPT1b and SPT1c, each feeding three or four antennas in each wing.

111.

Second, Professor Seeds gave convincing evidence that coaxial DAS in a tree or multiple star architecture had been extensively deployed for many years by July 2000. I recognise that it can be very difficult to locate actual wiring diagrams of particular installations showing a tree network, but that does not mean they did not exist. This is precisely the sort of low-level CGK issue where the Court depends on the evidence of properly qualified expert witnesses who were in the field at the relevant date. Professor Seeds was such an expert and Dr Acampora was not.

112.

Professor Seeds said there were numerous cascaded analog systems in operation by July 2000, including those for VHF radio coverage in tunnels on parts of the London Underground.

113.

Third, in any event, there is plainly no difference in principle between a double star arrangement and a tree arrangement, and Dr Acampora did not identify any. Indeed, I am driven to the conclusion that all of the debate about network architectures was the result of CommScope and Dr Acampora needing to dispute that so-called expansion units were CGK.

The LGCell DAS product

114.

As I have already indicated, the issue was really whether the notion of an expansion unit was CGK. Dr Acampora said he was not aware of any expansion units as mentioned by Professor Seeds and was of the view that they were not CGK. Professor Seeds drew attention to the LGCell system as a high-profile example of a system which incorporated expansion units. Indeed, his evidence was to the effect that the LGCell system itself was so well-known in the art by the priority date that it was CGK. Again, Dr Acampora said he was not aware of the LGCell system and did not think it was CGK.

115.

Professor Seeds gave clear and convincing evidence that the Skilled Person would have been aware of the LGCell DAS product, marketed by LGC Wireless. The company was founded in 1996 and launched its primary product, the LGCell system, in February 1997. By June 1999 it had supplied some 400 systems, including in some high-profile and large-scale deployments which had been publicised, such as the San Jose Tech Museum in California, Petronas Tower in Malaysia and the Seattle Baseball Stadium. The company was a frequent attendee (and I infer, exhibitor) at wireless industry trade shows, of the type which the Skilled Person or someone in his company would have attended. It had partnerships with telecoms operators and distributors such as Telefonica, Vodafone and Phillips Electronics and by the priority date was one of the market leaders in DAS. A 2001 review of the DAS market in 2000 placed LGC Wireless as the second largest DAS company in the world, with a 14% market share of the in-building and public access market, of which Professor Seeds said the great majority would have been the LGCell product. Although LGC Wireless was a US business, I was satisfied from Professor Seeds’ evidence that the DAS field was an international one. Any Skilled Person in the UK had to have been aware of significant developments in the US in particular, since Professor Seeds identified it as by far the largest market in the world at that time.

116.

From Professor Seeds’ evidence, I am satisfied that the LGCell system was CGK (in the UK). The fact that Dr Acampora had not encountered either the LGCell system or, apparently, any expansion unit is a reflection of the fact, in my view, that he was not sufficiently in the relevant art or in touch with practical developments in this field around the Priority Date.

117.

Professor Seeds also gave evidence that, even if the Skilled Person was not aware of the LGCell product by the priority date, s/he would necessarily have come across it when embarking on a DAS project at that time. The conduct of any such project would have begun with a review of the products already on the market, which would have readily identified the LGCell product as a leading product from a leading manufacturer. Professor Seeds gave unchallenged evidence that he conducted these types of competitor product reviews as part of development work at his company Zinwave at around the priority date. He exhibited various documents obtained from the Wayback Machine as the sort of materials which would have been located by the Skilled Person. CommScope sought to characterise these materials as ‘self-serving’, but that misses the point: they evidence the marketing of the LGCell product. CommScope also drew attention to the fact that LGC Wireless only appointed a UK distributor in November 1999 and submitted there was no evidence it did anything in the UK before the priority date. Various other documents put to Professor Seeds in cross-examination did not dent his clear evidence that those working in the DAS field would have known of LGCell. As he put it: ‘LGCell had this very strong growth profile which was, you know, eating some of the lunch of the incumbent players.’

118.

The LGCell product was a DAS which used in-building multimode optical fibre. The LGCell system comprised a ‘Main Hub’ unit, which interfaced with the base station. Connected to the Main Hub by multimode optical fibre were several ‘Expansion Hubs’, each of which connected to up to four Antenna Units. Professor Seeds explained the advantages of the system: the overall cost of coverage using the LGCell system was substantially reduced relative to single mode optical fibre DAS systems through the use of (a) the Expansion Hubs and (b) multimode optical fibre. This was why it took a lot of business.

119.

Considering the question of whether ‘expansion units’ generally were CGK, again I have no doubt that they were. In this regard, CommScope made two main points. First, they submitted that SOLiD’s case that expansion units were CGK rested exclusively on the LGCell product. That submission is wrong: Professor Seeds provided plenty of support, LGCell aside, for the proposition that expansion units were standard and well-known.

120.

CommScope’s second point was based on three documents which were put to Professor Seeds in cross-examination. Their point was that if expansion units were CGK, then they would have been mentioned in each or at least one of these documents. There were at least two important premises underlying this questioning: the first was that if expansion units were CGK they would be used everywhere or at least in the major systems identified; and the second was that it would have been relevant for each of these publications to mention expansion units. As will be seen, a common link between all three documents was an Italian company called Tekmar Sistemi Srl, whose products were used with single mode optical fibre.

121.

First, CommScope relied on some extracts from a textbook, Radio over Fiber Technologies for Mobile Communications Networks, published in 2002, which were put to Professor Seeds in cross-examination. He acknowledged it was written by serious researchers in both academia and industry. CommScope point to references to real-life systems and installations, and to the fact that the book contains some DAS topologies. It is correct that there is no mention of an expansion unit. CommScope submit it is simply impossible that expansion units were CGK in DAS but not included in this book.

122.

The first of the two chapters exhibited was Ch 5, written by David Wake, which was concerned with Radio over Fibre (RoF) for in-building coverage; Wake acknowledged the assistance of Dr Schuh of Telia and Andrea Casini of Tekmar. His chapter contained a schematic of a Tekmar system in a 6-storey building. The second, chapter 7, was written by Alan Powell. Professor Seeds had worked with David Wake and also knew Alan Powell.

123.

The preface summarised Chapter 7 as explaining the historical aspects of RoF and mentioned two installations by Tekmar for the 2000 Sydney Olympics and the Bluewater Shopping Centre in the UK, which Professor Seeds described as landmark installations, and the Bluewater installation as ‘well-known’ in the UK. Professor Seeds already knew (from colleagues) that the Bluewater architecture was a Tekmar system – a large single star system.

124.

CommScope’s point was that, when installing a DAS for Bluewater or the Sydney Olympics, using expansion units would have been very useful. Professor Seeds disagreed. The special consideration for the Olympic Park was that the incumbent operator in Sydney owned both the fixed line telecoms network (the phone system) and the mobile network. For that reason the fibres to connect the antenna units (quite well spread out, fixed to lampposts) were just an internal transaction for that provider. He contrasted that special situation with the more representative position in the UK, where the telecoms operator is a competitor of the owner of the fixed lines and has to lease the fixed lines at whatever rate they can negotiate. That is a scenario where reducing line lengths by the use of expansion units becomes ‘quite economically viable’. In short, there was a particular reason why expansion units were not used in that application.

125.

Professor Seeds also explained why expansion units were not used in the Bluewater system (which the Wake chapter identifies as comprising 41 antennas and over 10km of single mode fibre). In short, the space is quite open, with line of sight over quite long distances and few or no walls to block signals. The antenna units can be well spaced out, and he envisaged a single star arrangement might well have been suitable (he had not examined the Bluewater system, but had heard of it from colleagues). He contrasted that application with a normal office environment, where there is often quite a lot of partitioning, creating a greater need for expansion units and a larger number of antenna units. As he said, what would actually be specified would depend on an analysis of the space to be covered.

126.

Then Professor Seeds explained Alan Powell’s particular approach in Chapter 7. Alan Powell worked for Decibel Products, which was owned by Alan Microwave, as were Tekmar (a single mode fibre DAS company) and Mikom; in other words, he named the companies with which he was associated. Although he mentioned ‘Other companies’ who used multimode fibres, he did not name them, and Professor Seeds said that if you wanted multimode fibre, the companies indicated would have been ADC and LGC Wireless. The clear implication was that Tekmar took a particular single-fibre approach which was not compatible with the use of expansion units.

127.

The second document put to Professor Seeds was a paper by Dr Schuh published in 1999, entitled ‘Hybrid Fibre Radio Access: A Network Operators Approach and Requirements’ (Ralf Schuh, David Wake and two other authors). The paper reported on an investigation into the use of Hybrid Fibre Radio for the transport of multiple radio signals to some distributed Remote Antenna Units. Although Counsel suggested there was no sign of any expansion units in the paper, Professor Seeds was not sure because he thought the functionality of splitting and combining was indicated in a three-way split shown in figure 3. He said the physical embodiment of an expansion hub would depend on the technology in use, analog or digital.

128.

The third document was a patent owned by Tekmar Sistemi Srl, on which the co-inventors were Andrea Casini and Pier Faccin, again known to Professor Seeds. One of the aims of their invention was to minimise the amount of optic fibre used in a DAS system. Professor Seeds described their proposed network as a tap backbone architecture. He said it was not daisy chain (in which the signal would be demodulated before and modulated after each node) but instead an architecture in which a small amount of optical power is extracted at each node to drive the attached antenna. Professor Seeds did not find the lack of any mention of expansion units in this Patent to be at all surprising. The patent was addressing the problem of trying to cover a wide outdoor area, where the antennae are well spaced out. The serious practical problem which arises if you have one fibre snaking around all the antennae is that you lose your network if someone digs up the fibre. The patent solves this problem by having two fibres in the loop, with the signals travelling in opposite directions in each. In that scenario he did not expect the patent to want to use expansion units because of the sort of coverage problem it was trying to solve. I found his explanation convincing. Furthermore, there was no evidence to the contrary.

129.

Although Dr Acampora exhibited the Schuh paper, it was provided to him by CommScope’s solicitors for the purpose of illustrating what was CGK. He drew attention to the operator requirements listed in it and the taxonomy of wireless technologies and architectures it discusses. He did not exhibit it because of the supposed absence of mention of expansion units. The other two documents appeared in a cross-examination bundle.

130.

In summary therefore, I have no evidence as to who selected the three documents put to Professor Seeds or why, but I have no doubt they were selected precisely because CommScope’s lawyers thought they did not mention expansion units. That does not prove that expansion units were not CGK, nor does it dent Professor Seeds’ evidence that they were. In fact, I conclude that there were particular reasons why expansion units were not mentioned in each of these three documents: either the specific explanations which Professor Seeds gave and/or the fact that the discussion was at a level of generality which did not descend to the need to mention them. Thus the two premises underlying this line of questioning were wrong. Indeed, it was clear from Professor Seeds’ evidence that expansion units are not used everywhere, but only where the application requires them.

131.

I have dealt with the challenge which CommScope mounted in cross-examination in some detail because it was a big issue in this case. However, I am entirely satisfied from Professor Seeds’ evidence that the Skilled Person would have been familiar with expansion units, which provide a way to expand the signal coverage area of DAS arrangements, whilst reducing the total length of transmission medium required, and that these were widely used in deployed systems by July 2000. The point was simple. As he said, expansion units are attractive where antenna units are to be located far from the host unit, because they save cabling and therefore cost.

132.

On the other side of the coin, beyond his assertion that he had not encountered any expansion units, it was apparent that Dr Acampora was not in a position to dispute Professor Seeds’ evidence, which I have accepted. I also note that this evidence was entirely consistent with what I have found above as to network architectures and the LGCell system.

Techniques for handling overflow

133.

In any system where two or more digital signals are added together, the resultant output is liable to have a larger magnitude than any of the individual inputs. Professor Seeds said this was known as ‘overflow’. Dr Acampora agreed that overflow was a problem which the Skilled Person knew might have to be addressed when using digital summers. Professor Seeds said there were four known ways to address a potential overflow problem.

134.

The initial step was to assess the realistic scale of the problem. If the possibility of performance-damaging overflow was remote, or any damage was unlikely to be significant, the decision might be that no mitigating steps were required. If overflow might have significantly detrimental consequences, then there were the following three possible approaches.

135.

First, to use an adder with an output word length sufficiently greater than the word length of the inputs so that if all inputs were at maximum value, the output value would be within the output word length. Thus overflow at the adder itself is avoided. If there was a word length limit downstream, then a scaling or masking stage could be deployed to obtain the required shorter word length.

136.

Second, a scaling or masking step could be deployed before the adder to diminish the input words so that the output word would be within bounds.

137.

Third, limiting. This accepts that overflow will occur and the adder is designed to output the maximum value in the event of overflow occurring.
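Purely by way of illustration, these three approaches might be sketched in software as follows (a minimal Python sketch; the 16-bit word length, the four input streams and the sample values are all assumed for the example and are not taken from the evidence):

```python
# Illustrative sketch only: 16-bit input words and the summing of four
# uplink streams are assumed for the example, not taken from the evidence.

MAX16 = 2**15 - 1          # largest positive value of a signed 16-bit word
MIN16 = -2**15
N_INPUTS = 4

def sum_word_growth(samples):
    """Approach 1: widen the output word so the sum of N_INPUTS full-scale
    16-bit inputs cannot overflow (18 bits suffice for four inputs); scale
    back afterwards if a 16-bit result is required downstream."""
    wide_sum = sum(samples)                               # fits in 18 bits
    return max(MIN16, min(MAX16, wide_sum // N_INPUTS))   # simple scaling

def sum_prescaled(samples):
    """Approach 2: scale the inputs down before the adder so the 16-bit
    output cannot overflow."""
    return sum(s // N_INPUTS for s in samples)

def sum_saturating(samples):
    """Approach 3: limiting - accept that overflow may occur and clamp the
    output to the maximum (or minimum) representable value."""
    s = sum(samples)
    return max(MIN16, min(MAX16, s))

samples = [30000, 30000, -10000, 25000]      # invented reverse-path samples
print(sum_word_growth(samples), sum_prescaled(samples), sum_saturating(samples))
```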

138.

Dr Acampora agreed that the second and third approaches were CGK, but he disputed that scaling or masking the summed result to fit within some required word length was CGK. In cross-examination, Dr Acampora initially attempted to reverse the effect of his written evidence, consigning these approaches to the analog domain only. He then appeared to realise he was contradicting himself and reverted to his original position once his written evidence was put to him. As SOLiD submitted, that whole passage of evidence did Dr Acampora no credit at all. The difficulties he experienced were entirely self-inflicted and, in my view, were created solely by him adopting a position to give CommScope an argument, rather than anything based on what was actually CGK.

139.

In summary, I find that the following were CGK: the problem of overflow in DSP and the approaches described by Professor Seeds for addressing the problem.

Digitisation capacity/bandwidth capacity for ADCs

140.

This is the fourth CGK dispute which emerged. What was the realistic bandwidth capacity for an analog-to-digital converter by the priority date?

141.

In his first report, Dr Acampora referred to a 1993 paper by Wala which he said recognised the benefits of digitizing an entire cellular band and that ADCs of sufficient performance were starting to become available. He also pointed out that the downsides of such broadband sampling were understood by the priority date: essentially the sampling rate and processing power required.

142.

In response, Professor Seeds indicated that some care was required. He pointed out that the Wala paper was considering ADCs which could sample bandwidths of 12.5 MHz, whereas by the priority date the typical bandwidth of an entire cellular band was much wider: for example, in 1997 the allocation of frequency bands to the UMTS system presented bandwidths of 140 MHz or 90 MHz. The Skilled Person would have recognised that digitizing the entire UMTS bandwidth was not only unnecessary, but would result in greatly increased power consumption, implementation cost and difficulty in meeting regulatory requirements, including radiated noise power spectral density. In the UK 3G spectrum auction ending in April 2000, the successful bidders were allocated channels grouped in contiguous bandwidths from 5 MHz to 15 MHz. Digitising those contiguous bandwidths was possible by the priority date but if, as was likely, a DAS was required to handle signals from multiple network operators, the Skilled Person would understand s/he needed to configure a system using multiple ADCs.
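By way of a worked illustration only, applying the standard Nyquist criterion that a band of width B digitised at baseband requires a sampling rate of at least 2B (the bandwidth figures are those mentioned above):

\[
f_s \ge 2B:\qquad B = 15\ \text{MHz} \;\Rightarrow\; f_s \ge 30\ \text{Msamples/s};\qquad B = 140\ \text{MHz} \;\Rightarrow\; f_s \ge 280\ \text{Msamples/s}.
\]

This is no more than the order-of-magnitude arithmetic underlying the contrast drawn in the preceding paragraph between digitising a single operator's allocation and digitising the full UMTS band.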

143.

It is notable that Dr Acampora did not engage with any of Professor Seeds’ reasoning on this issue, and certainly did not dispute any of it. I found it entirely convincing. With other issues in the case in mind, Dr Acampora’s evidence on this point was an attempt to establish some sort of mindset argument to be applied when the Skilled Person was considering Oh.

THE PATENT

144.

Due to some of CommScope’s arguments as to the advantages of EP850, it is necessary to explain the teaching in some detail. It is entitled ‘A method for point-to-multipoint communication using digital radio frequency transport’.

145.

[0001] explains the technical field of the invention:

‘The present invention is related to high capacity mobile communications systems, and more particularly to a point-to-multipoint digital micro-cellular communication system.’

146.

[0002] and part of [0003] set the scene:

‘[0002] With the widespread use of wireless technologies additional signal coverage is needed in urban as well as suburban areas. One obstacle to providing full coverage in these areas is steel frame buildings. Inside these tall shiny buildings (TSBs), signals transmitted from wireless base stations attenuate dramatically and thus significantly impact the ability to communicate with wireless telephones located in the buildings. In some buildings, very low power ceiling mounted transmitters are mounted in hallways and conference rooms within the building to distribute signals throughout the building. Signals are typically fed from a single point and then split in order to feed the signals to different points in the building.

[0003] In order to provide coverage a single radio frequency (RF) source needs to simultaneously feeds multiple antenna units, each providing coverage to a different part of a building for example. Simultaneous bi-directional RF distribution often involves splitting signals in the forward path (toward the antennas) and combining signals in the reverse path (from the antennas).’

147.

Then the patent discusses what it says are various current solutions and their disadvantages:

i)

First, in the remainder of [0003], distributing the signals directly at RF frequencies using passive splitters and combiners, the big problems being the insertion loss associated with passive devices and with coaxial cable severely limiting the distance over which RF signals can be distributed.

ii)

Second, in [0004], taking an RF signal from a base station and down converting to a lower frequency and distributing it via Cat 5 (LAN) or coaxial cable wiring to remote antenna units, with up conversion at each remote unit. Whilst down-conversion reduces insertion loss, the signals are still susceptible to noise and limited dynamic range. Further, each path in the distribution network requires individual gain adjustment to compensate for the insertion loss in that path.

iii)

Third, in [0005], using the RF signals from a base station to directly modulate an optical signal which is transported over fibre optic cables as analogue modulated light signals.

148.

[0006] explains that digitization of the RF spectrum prior to transport solves many of these problems and allows much greater distances to be covered, whilst eliminating the path loss compensation problem. It is then stated that this has been strictly a point-to-point architecture, the disadvantage of which is the equipment and cost requirement, because each remote antenna unit requires its own host RF to digital interface device. The burden and disadvantage are illustrated by reference to a building having 20 floors, where the requirement would be for 20 host RF units for 20 remote antenna units, one per floor, and it is suggested that some applications may require more than one remote antenna unit per floor. Then:

‘As a result, there is a need in the art for improved techniques for distributing RF signals in TSBs, which would incorporate the benefits of digital RF transport into a point to multipoint architecture.’

149.

As Professor Seeds correctly noted, this is the core issue which EP850 seeks to address. The proposed solution is introduced in general terms in [0010]-[0012], which outline three embodiments:

i)

The first is a digital RF transport system with one digital host unit (‘DHU’) and at least two digital remote units (‘DRU’). The shared circuitry in the DHU performs bi-directional simultaneous digital RF distribution between the DHU and the at least two DRUs.

ii)

In the second, the digital RF transport system includes a DHU and at least one digital expansion unit (‘DEU’) and at least two DRUs, each coupled to one of the DHU or the DEU. Again, the shared circuitry in the DHU performs bi-directional simultaneous digital RF distribution between the DHU and the at least two DRUs.

iii)

The third embodiment is a method for performing point to multipoint RF transport which includes receiving RF signals at a DHU and converting the signals to a digitised RF spectrum which is then optically transmitted to a plurality of DRUs, reception of that digitised RF spectrum at the DRUs, converting it to analog RF signals and transmitting the analog RF signals via a main RF antenna at each DRU.

150.

After the usual brief description of the drawings in figs 1-8, the Detailed Description begins in [0014] with a reminder that the specific embodiments shown in the drawings are illustrative and other embodiments may be utilised and structural changes made without departing from the scope of the present invention.

151.

Figure 1 provides a general overview of an exemplary point to multipoint digital transport system within a complex of TSBs. In this arrangement the wireless network 5 is coupled, on the exterior side, to the Public Switched Telephone Network (PSTN), a Mobile Telecommunications Switching Office or other switching office/network and, on the interior side, to a Wireless Interface Device (WID) 10. This is the interface between the transport system and the wireless network; it can take various conventional forms and can have either a wired or wireless connection to the DHU 20, but none of the detail matters. The received RF signal at the WID is transmitted to the DHU 20, which digitises the signal. The digitised signal is optically transmitted, either directly or via one or more DEUs 30, to multiple DRUs 40 and 40' respectively, via fibre optic cable, although other carriers can be used.

152.

Both the DHU and DEU split signals in the forward path and sum signals in the reverse path. The specification explains that in order to accurately sum the digital signals in the reverse path, the data needs to arrive at exactly the same rate. All of the DRUs need to be synchronised so their digital sample rates are locked together. This is done by locking everything to a bit rate transmitted over the fibre. Professor Seeds commented that this was a conventional solution for optical communication systems, of which the skilled person would be aware.

153.

Figs 2 & 3 (which I need not set out) show two alternative embodiments where the interface between the WID and the DHU is described in more detail. Fig 2 has a bidirectional amplifier acting as the interface, whereas in figure 3 the DHU is connected directly to a base station 310. Otherwise, the systems shown are identical. The downlink RF signal is received at the DHU and is digitised and transmitted to multiple DRUs either directly or indirectly via a DEU. The uplink RF signals received at each DRU are digitised and transmitted to the DHU either directly or via a DEU via optical transmission lines. The reverse path signals are summed at the DHU, converted into analog signals and transmitted back to the base station (via the BDA in the fig 2 embodiment).

154.

The operation of the DHU is described in [0029]-[0038] by reference to Figure 4. Similarly, the operation and signal flow in a DRU is described in [0039]-[0044] by reference to Figure 5. Professor Seeds helpfully annotated these figures to show the signal flow and key steps:

155.

Thus in the forward/downlink path, the downlink signal in analog form is amplified, down-converted to IF in mixer 452, amplified, filtered and down-converted to baseband by mixer 460. A dither signal is added to reduce distortion on small signals caused by ADC quantising, and the signal is converted to digital form in ADC 464. The digital output from the ADC is parallel-to-serial converted and used to modulate the optical transmitters 431-1 to 431-P.
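The role of the dither can be illustrated with a minimal Python sketch (the 12-bit converter, the tone level and the dither amplitude are all assumed for the purposes of the example; none of these figures comes from the Patent or the evidence):

```python
# Illustrative sketch only: the 12-bit converter, tone amplitude and
# dither level are assumed for the example, not taken from the Patent.
import math
import random

FULL_SCALE = 1.0
BITS = 12
STEP = 2 * FULL_SCALE / (2**BITS)        # quantisation step size (one LSB)

def quantise(x):
    """Ideal quantiser of a value in [-1, 1] to the nearest step."""
    return STEP * round(x / STEP)

def digitise(signal, dither_amplitude=0.0):
    """Add uniform dither of +/- dither_amplitude before quantising,
    mirroring the order of operations described above (dither, then ADC)."""
    return [quantise(x + random.uniform(-dither_amplitude, dither_amplitude))
            for x in signal]

# A very small tone, smaller than half of one quantisation step.
tone = [0.4 * STEP * math.sin(2 * math.pi * 0.01 * n) for n in range(1000)]

hard = digitise(tone)                           # no dither
dithered = digitise(tone, dither_amplitude=STEP / 2)

print("without dither, distinct output levels:", len(set(hard)))
print("with dither, distinct output levels   :", len(set(dithered)))
# Without dither the small tone never crosses a quantisation threshold and
# is lost; with dither the tone survives, on average, in the digital samples.
```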

156.

In the DRU, the downlink optical signal is converted to electrical form in receiver 501, followed by clock and bit recovery in 503 and serial to parallel conversion in the demux 505. The signal is converted to analog baseband in the DAC 509, then upconverted to IF in mixer 502. Following filtering and amplification, the IF signal is upconverted to RF in mixer 508, followed by further filtering and amplification before being fed to antenna 599 via the duplexer 547.

157.

In the uplink path, signals from the antenna 599 pass via the low noise amplifier 543 and level adjusting attenuator 539 to mixer 535 where they are converted to IF. After filtering and amplification they are converted to baseband by mixer 544, sharing common local oscillator 515 with mixer 502, have dither added and then are digitised in ADC 538 (which is mislabelled as a DAC but correctly described as an ADC in the specification). The ADC output is converted to serial form in 536 and converted to an optical signal in transmitter 532.

158.

Back in Figure 4, the received up-link signals are converted to electrical form in receivers 418-1 to 418-P, followed by clock and data recovery in 445-1 to 445-P. The resulting serial streams are converted to parallel form in 441-1 to 441-P and added in 498. An overflow algorithm is applied to give a sum within the range of the DAC 494, which converts the summed samples to analogue form. The resulting analogue signal is up-converted from baseband to IF in mixer 492, which shares a common local oscillator (LO) with mixer 460. After filtering and amplification the IF signal is up-converted to RF in mixer 486, amplified, filtered and passed back to the base station.

159.

The digital expansion unit is described at paragraphs [0045] – [0048] by reference to the embodiment shown in Figure 6. The DEUs are located between the DHUs and the DRUs and provide 'hubs' for connecting multiple DRUs, so that the architecture becomes a cascaded star or tree architecture. Professor Seeds’ annotated version of Fig 6 looks like this:

160.

In the downlink path, the digitised signal (from the DHU or another DEU) is received at the optical receiver 651, which converts it to electrical form. Clock and bit recovery is performed by 653, which in turn applies the signal to a fan-out buffer 607 which splits the signal X ways to drive X optical transmitters 655-1 to 655-X. Each of the resulting optical signals is then transmitted to DRUs or other DEUs. In the reverse uplink path, the DEU receives signals from multiple DRUs (or other DEUs) via optical receivers 669-1 to 669-X. Each of these signals is synchronised via the clock and bit recovery circuits 673-1 to 673-X, and then de-multiplexers 671-1 to 671-X convert the signals to parallel form prior to addition in 665, which forms part of Field Programmable Gate Array (FPGA) 661, which also implements an overflow handling algorithm 663. The output signal is then fed to multiplexer 657, which adds framing and control information and converts the parallel signal to serial. The signal is then transmitted upstream to the DHU 659, either directly or via one or more DEUs.
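On the same purely illustrative basis (Python; invented 16-bit samples; the clock recovery, framing and optical stages are omitted), the essential digital functions of a DEU as just described, namely fan-out in the downlink and summation with overflow handling in the uplink, might be sketched as:

```python
# Illustrative sketch only: invented 16-bit samples; clock recovery,
# framing/control insertion and the optical stages are omitted.

MAX16, MIN16 = 2**15 - 1, -2**15

def deu_downlink(sample_stream, x_outputs):
    """Fan-out buffer: each downlink sample is simply copied to every one
    of the X downstream ports (DRUs or further DEUs)."""
    return [list(sample_stream) for _ in range(x_outputs)]

def deu_uplink(sample_streams):
    """Sample-by-sample addition of the X uplink streams, with a limiting
    (saturating) overflow algorithm applied to the sum."""
    summed = []
    for samples in zip(*sample_streams):
        s = sum(samples)
        summed.append(max(MIN16, min(MAX16, s)))   # overflow handling
    return summed

uplinks = [[1000, 30000, -5], [2000, 30000, -5], [3000, -100, 10]]
print(deu_uplink(uplinks))   # [6000, 32767, 0] - the middle sample saturates
```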

161.

Professor Seeds commented (and I agree) that the skilled person would appreciate that the functionality of the DEU is a subset of the functionalities contained within the DHU, Figure 4. No additional functionality is included.

162.

The final embodiment of the system is shown in Figure 7 and briefly described at [0049]. This describes a situation where the WID (for instance, in the form of a microcell base station), is located at a distance from the TSB that is to be served by the system. In this arrangement, the DHU is located at the same site as the WID and is connected by single mode optical fibre to a DEU located at the TSB to be served. The system architecture is however identical to the general DHU/DEU/DRU configuration described above.

163.

The specification ends with four paragraphs under the heading Conclusions, the first three of which essentially repeat [0010]-[0012]. The fourth paragraph begins by essentially repeating [0014] but then continues with this passage:

‘For example, a digital remote unit is not limited to the receipt and summing and splitting and transmitting of digitized radio frequency signals. In other embodiments, the digital host unit is capable of receiving and summing analog radio frequency signals in addition to or instead of digitized radio frequency signals. As well, the digital host unit is capable of splitting and transmitting analog radio frequency signals in addition to or instead of digitized radio frequency signals. This application is intended to cover any adaptations or variations of the present invention.’

164.

Although neither side drew attention to this passage, it tends to confirm that the patentee was intending to claim in the broadest way, such that his claimed systems were not intended to be limited to those which dealt with only digital signals between DHU and DRUs.

165.

Standing back from the detail, it is clear what problem the patent addresses and what its solutions are. A form of digital DAS was already known: digitisation of analog RF signals for transport over distance, with conversion back to analog after transport, as the Patent acknowledges. The problem addressed relates to the use of a point-to-point architecture in TSBs, where for example 20 floors require 20 host RF to digital interface devices for 20 (or more) remote antenna units, plus associated optical or other cabling. In such a setup there is a lot of expensive equipment and cabling. Hence the need to incorporate the benefits of digital RF signal transport into a point to multipoint architecture.

166.

The patent provides two solutions. The first comprises a single host unit which transmits to multiple remote units, using a star arrangement. The second comprises a single host unit using a tree arrangement where the host unit transmits the digital signals either directly to the remote units or via what is termed an expansion unit, allowing further branching to a set of remote units. Describing the two solutions at this high level of generality oversimplifies, because EP850 contains details of how these solutions may be successfully implemented.

167.

Before I come to the claims and the issues of construction, I must deal with the contentions made by CommScope, via Dr Acampora’s evidence, as to supposed advantages of EP850. Dr Acampora stated that the Skilled Person would understand that EP850 has two key features:

i)

The first he termed the ‘Multi-Protocol Feature’. He said the EP850 system supports multiple protocols using a digital transport system but is ‘protocol agnostic’.

ii)

The second he termed the ‘Arbitrarily Extensible Feature’ i.e. that the system can be readily expanded incrementally using DEUs, so that DRUs can be added without changing the configuration of existing units.

168.

Professor Seeds was of the view that there is nothing in the EP850 specification which suggests to the Skilled Person that either of these features is central to the teaching of EP850. In fact, he considered the Skilled Person would recognise that these features were common characteristics of most DAS arrangements known by the Priority Date.

169.

As to the first feature, Professor Seeds explained that the core function of any digital DAS is to digitise a part of the RF spectrum. All DAS, whether analog or digital, are ‘protocol agnostic’ in the sense that they do not ‘care’ which protocol is being transported, provided they can deliver the required performance. In the abstract therefore, every DAS is multi-protocol.

170.

However, in the real world, the Skilled Person, reading EP850 and seeking to implement its teaching, would know there are limits to the contiguous bandwidth that could be ingested with the main limit stemming from A/D performance. Professor Seeds was of the view, which I accept, that by July 2000, the maximum contiguous bandwidth that could be digitised in a single channel would have been about 30MHz. In practice, the Skilled Person would be far more likely to consider digitising just those parts of the cellular band allocated to particular network operators. In order to do so, s/he would implement a multi-channel arrangement, with the channels limited to the relevant sections of the cellular band.

171.

As to the second feature, it is clear that the use of DEUs to expand coverage is an optional feature of EP850, to be taken advantage of in suitable situations. Dr Acampora described the use of scaling as a key enabler of extensibility. However, as Professor Seeds pointed out, EP850 does not itself make the link between scaling and the use of expansion units.

Claims 1 and 7

172.

As broken down into convenient integers, claim 1 reads as follows. I have underlined the words and expressions the construction of which is in dispute:

1.1

A digital host unit (20, 420) to communicatively couple to a plurality of digital remote units (40, 540),

1.2

the digital host unit (20, 420) comprising a radio frequency interface to receive an original forward-path analogue radio frequency signal from a wireless interface device (10, 211, 310) and to communicate a reverse-path analogue radio frequency signal to the wireless interface device (10, 211, 310),

1.3

wherein each of the plurality of digital remote units (40, 540) receives a respective original reverse-path analogue radio frequency signal comprising a reverse-path radio frequency spectrum

1.4

and wherein each of the plurality of digital remote units (40, 540) generates respective reverse-path digital samples indicative of the respective original reverse-path analogue radio frequency signal received at that digital remote unit;

and characterized in that the digital host unit (20, 420) comprises:

1.5

an analog-to-digital converter (464) to convert the original forward-path analogue radio frequency signal to forward-path digital samples;

1.6

at least one transmission line interface (431-1, 431-2, 431-P, 418-1, 418-2, 418-P) to communicate the forward-path digital samples to at least one of the plurality of digital remote units (40, 540) and to receive the reverse-path digital samples from the plurality of digital remote units (40, 540);

1.7

a digital summer (498) to digitally sum corresponding reverse-path digital samples received from the plurality of digital remote units (40, 540) to produce summed reverse-path digital samples;

1.8

a digital-to- analogue converter (494) to convert the summed reverse-path digital samples to a reconstructed reverse-path analogue radio frequency signal.

173.

Similarly, claim 7:

7.1

The digital host unit (20, 420) of claim 1,

7.2

wherein a second plurality of digital remote units (40') are communicatively coupled to the digital host unit (20, 420) using a digital expansion unit (30, 630) that is communicatively coupled to the digital host unit (20, 420);

7.3

wherein each of the second plurality of digital remote units (40') receives a respective original reverse-path analogue radio frequency signal comprising the reverse-path radio frequency spectrum;

7.4

wherein each of the second plurality of digital remote units (40') generates respective reverse- path digital samples indicative of the respective original reverse-path analogue radio frequency signal received at that digital remote unit (40');

7.5

wherein each of the second plurality of digital remote units (40') communicates the respective reverse-path digital samples generated by that digital remote unit (40') to the digital expansion unit (30, 630);

7.6

wherein the digital expansion unit (30, 630) digitally sums corresponding reverse-path digital samples received from the second plurality of digital remote units (40') to produce summed reverse-path digital samples;

7.7

wherein the digital expansion unit (30, 630) communicates the summed reverse-path digital samples to the digital host unit (20, 420); and

7.8

wherein the digital host unit (20, 420) digitally sums the reverse-path digital samples received from the digital expansion unit (30, 630) with corresponding digital samples received from the plurality of digital remote units (40, 540).

Issues of Construction

174.

Before I discuss the various issues of construction which arise, I mention three unusual but related features of the way in which CommScope presented its arguments on construction. First, CommScope’s arguments were not related to and did not depend on anything said in the specification of EP850. Second, CommScope contended that the issues of construction were best considered in context, but the context which CommScope had in mind was the invalidity arguments, not the specification of EP850. In other words, CommScope was construing its claim so as to distinguish over Oh. Oh is not referred to in EP850, so there is no basis for assuming the patentee wrote his claim with Oh in mind and intended to distinguish his claim from Oh. Third, it will be seen that on perhaps the most critical construction issues, CommScope’s arguments are linguistic, depending on particular words used in the phrase in question. It remains to be seen whether the linguistic arguments coincide with or are contrary to a purposive approach.

(i)

‘to communicative couple’/ ‘communicatively coupled’

175.

It is important to assess issues of construction in the appropriate context. At its broadest, this means in the context of the patent as a whole, when read through the eyes of the skilled person armed with the common general knowledge.

176.

However, another relevant context is that of the claim in question. In that regard, claim 1 is an apparatus claim to a digital host unit with the attributes set out in the claim. Such a digital host unit does not actually need to be connected or communicatively coupled to any digital remote unit, but it must be suitable for such connection. Hence, ‘to communicatively couple’ is correctly interpreted in the sense that the digital host unit must be suitable for such connection.

177.

By contrast, the apparatus in claim 7 requires that the digital remote units ‘are communicatively coupled’ to the digital host unit ‘using a digital expansion unit that is communicatively coupled to the digital host unit’. In other words, the digital host unit must actually be connected as required.

178.

As to the nature of the connection, the expression ‘communicatively coupled’ is deliberately broad: any connection which permits communication will do. The short point is that this expression embraces both direct and indirect connections.

(ii)

‘comprising’/ ‘comprise’

179.

These words bear their traditional meaning in patent claims. For example, the digital host unit in claim 1 must include the features which follow these words (x, y, z) but can include other features and elements as well, in contradistinction to a claim which specified that the digital host unit consists of x, y, z.

(iii)

‘an original forward-path analogue radio frequency signal’ / ‘an analog-to-digital converter to convert the original forward-path analogue radio frequency signal’

180.

I deal with these points together because CommScope make a linked submission in relation to the construction of both. They submit that the claim is limited to an arrangement where a single RF signal is ingested and processed by a single ADC. Thus, CommScope advocate a narrow construction in which the original forward-path analogue radio frequency signal must be the entirety of the signal of interest from the base station.

181.

For the reasons explained by Professor Seeds, to the effect that it is very difficult to ingest a contiguous signal using a plurality of ADCs, I accept that a single ADC is used to convert each identifiable or individual RF signal supplied to the host unit. However, the claim is plainly not limited to an arrangement where the digital host unit contains only a single ADC. Claim 1 covers an arrangement where more than one RF signal is ingested, with each RF signal ingested being processed by its own ADC. No reason was identified why claim 1 should be limited as contended for by CommScope, other than CommScope’s desire to avoid Oh. Since Oh is not referred to in the Patent itself, as I have already mentioned, there is no warrant for reading the Patent and claim 1 as not claiming an Oh arrangement.

182.

In essence I agree with SOLiD’s submission that claim 1 is deliberately broad.

(iv)

‘a digital-to- analogue converter to convert the summed reverse-path digital samples to a reconstructed reverse-path analogue radio frequency signal’

183.

The same points above apply equally to these equivalent expressions in integer 1.8. Each ADC in the forward path has to be matched by a DAC in the reverse path so that however many individual RF signals are ingested in the forward path, the same number of reverse-path RF signals must be reconstructed from the DACs in the reverse path.

(v)

‘a respective original reverse-path analogue radio frequency signal comprising a reverse-path radio frequency spectrum’

184.

Finally, there was some discussion over the patentee’s use of ‘spectrum’ as opposed to ‘signal’ and whether that made any difference. The expression ‘reverse path radio frequency spectrum’ appears in claims 1, 7 and 9. In the specification:

i)

[0006] speaks of ‘Digitization of the RF spectrum prior to transport…’. That phrase more naturally refers to the analogue RF signal input to the system from the base station side, but it could also apply to the input from a DRU.

ii)

[0012] is perhaps the most pertinent paragraph, being the consistory clause for the method claim. It uses the phrase ‘the digitized radio frequency spectrum’ four times and it means what it says: the whole of the digitized signal.

iii)

[0026] contains the only other mention of ‘spectrum’. When describing Figure 3 it refers to ‘DHU 320 essentially converts the RF spectrum to digital in the forward path and from digital to analog in the reverse path’, but immediately continues: ‘In the forward path, DHU 320 receives the combined RF signal from transmitters 323, digitizes the combined signal and transmits it in digital format over fibres …’. The former sentence is not literally true, in the sense that the whole RF spectrum is not converted. Instead ‘the RF spectrum’ must mean whatever signal the DHU receives, whether in the forward or reverse path. To the extent that it matters, in my view, this could be a single signal or a combined signal.

185.

When the point was put to him, Dr Acampora agreed with the notion that in EP850 RF spectrum referred to the frequency band of interest, whereas RF signals were the information-carrying waveforms within that band. I was not convinced, however. Overall, the specification does not use ‘spectrum’ in any consistent way. Instead, the specification appears to use signal and spectrum interchangeably. CommScope agreed with this in its written closing. When addressing construction of the phrase ‘original forward path analog radio frequency signal’ (see construction issue (iii) above) SOLiD drew attention to the various uses of the word ‘spectrum’ which I have reviewed above and submitted that the patentee had chosen not to assert a requirement for the digitization of the entire spectrum or even of a multiplicity of signals and that it was enough for the DHU and the ADC to digitise a single RF signal. I have agreed with that submission and it is consistent with my conclusion on the use of the word ‘spectrum’ in EP850.

186.

Standing back from the detail, I agree with an overarching submission made by SOLiD which was to this effect. On reading EP850, it is apparent that CommScope thought they had invented the point to multipoint digital DAS (so far as claim 1 is concerned) and such a DAS including expansion units (so far as claim 7 is concerned). This is indicated by the problem to which the Patent is addressed and the solution presented. Consistent with this, one would expect the claims to be widely drawn and they are. Against this backdrop, the attempts by CommScope to argue for a narrow claim construction are plainly driven solely by a desire to avoid Oh and not by anything in EP850 itself. These are further reasons which reinforce my rejection of CommScope’s attempts, via the issues of construction above, to place a narrow scope on the claims.

VALIDITY

Legal principles

187.

The parties seemed to be in agreement that the issues of anticipation and obviousness raised in this case required an application of standard and well-known principles.

188.

To anticipate, the prior art disclosure must ‘plant the flag’ i.e. it must be a clear and unambiguous disclosure of all of the features of the claim.

189.

I have also reminded myself of certain dicta made by Pumfrey J. in Research in Motion v Inpro [2006] EWHC 70 (Pat):

i)

First, at [111]: ‘A claim lacks novelty if it covers something that formed part of the state of the art at the priority date’.

ii)

Second, at [112]: ‘The teaching of the specification, once construed, is a pure question of fact, as is what the skilled man would do with that teaching without the exercise of inventive ingenuity.’

iii)

Third, at [128]: ‘As ever, the question is what is explicitly disclosed and what also is necessarily implicit in the teaching. The skilled man must be taken to read documents in an intelligent way, seeking to find what is disclosed as a matter of substance.’ (my emphasis).

190.

For obviousness, the correct legal approach is that summarised in Actavis v ICOS [2019] UKSC 15 at [52]-[73] per Lord Hodge, referring to the structured approach in Pozzoli v BDMO [2007] EWCA Civ 588 at [14]-[23] per Jacob LJ and citing Kitchin J. in his well-known passage from Generics v Lundbeck [2007] EWHC 1040 (Pat) at [74].

191.

Since disclosure is such a major point, I also mention that CommScope referred me to the following passage from Philips v Asustek [2019] EWCA Civ 2230, per Floyd LJ at [61]. Even though stated in the context of obviousness, the notion that one cannot strip out inconvenient detail from the prior art applies also, indeed especially, in the context of anticipation:

‘The task for the party attacking the patent on the ground of obviousness is to show how the skilled person would arrive at the invention claimed from the disclosure of the prior art. If the invention claimed is, as it is here, a simple idea, then it is correct that this simple idea is the target for the obviousness attack. That does not mean, however, that the court is entitled to assume that the skilled person takes a different approach to the prior art, stripping out from it detail which the skilled person would otherwise have taken into account, or ignoring paths down which the skilled person would probably be led: see the passage from Pozzoli cited above. The nature of the invention claimed cannot logically impact on the way in which the skilled person approaches the prior art, given that the prior art is to be considered without the benefit of hindsight knowledge of the invention.’

THE PRIOR ART

192.

As indicated above, the single piece of prior art, Oh, is a Korean patent application entitled ‘Digital optical repeater’, filed and published in 1999. The original document is in Korean but we have worked from an agreed translation. As Professor Seeds commented, some of the language is a little awkward (as translated) but it does not detract from a clear understanding of what the document discloses to the skilled person.

193.

A key issue on Oh concerns the relationship between the ‘general teaching’ and the ‘preferred embodiment’, adopting those terms for the purposes of argument. CommScope treat the two as different and distinct, insisting that SOLiD’s case depends upon the preferred embodiment alone because, so CommScope submit, the general teaching discloses neither a plurality of remote units nor digital summing in the host unit. For these reasons I need to explain something about the structure of Oh and the relationship between its general disclosure and its description of its preferred embodiment.

194.

It will also assist to have in mind the role which Oh plays in the various contentions put forward by SOLiD as to the invalidity of EP850:

i)

First, SOLiD contend that Oh anticipates claim 1 of the Patent, whether the disclosure is limited to a multichannel arrangement or whether Oh also discloses a single channel system. Since the arguments are different, it remains necessary to make findings as to what Oh disclosed to the Skilled Person.

ii)

Second, SOLiD’s fallback is that if, for some reason, Oh does not anticipate, claim 1 is nonetheless obvious over Oh.

iii)

Third, SOLiD acknowledge that Oh does not disclose a ‘digital expansion unit’ in claim 7, but SOLiD contend that claim 7 is nonetheless obvious over Oh.

195.

Oh is entitled ‘Digital Optic Repeater’. The Abstract explains what it does (emphasis added, NMS means Network Management System):

‘The present application discloses a digital optic repeater that enables the base station, which constitutes the mobile telecommunication network, and the optic repeater, which is installed in the radio wave shadow area, to convert the analogue intermediate frequency signals to the digital signals through the optical path for mutual transmission and reception. The analogue RF signals transmitted from the base station to the forward master unit of the master unit are converted to intermediate frequency signals and, after that, converted to digital signals, and said digital signals are transmitted to the slave unit along with the NMS, the control signals, through the optical line, said forward slave unit of the slave unit converting digital signals to analogue signals, converting them to RF signals, and transmitting them to the mobile terminals, thereby enabling the mobile terminals to receive clean signals free of noise. Conversely, the analogue RF signals transmitted from the mobile terminals to the reverse slave unit of the slave unit are also converted to intermediate frequency signals and transmitted to the master unit along with the NMS, the control signals, through the optical line, said reverse master unit of the master unit converting digital signals to analogue signals, and, after that, converting them to RF signals, and transmitting them to the base station, thereby enabling streamlined telecommunication. It also provides an additional effect of simplifying the construction of optic repeater by transmitting and receiving the NMS signals, which controls the slave operation, with no separate apparatus.’

196.

After introducing the drawings (and giving the reference numerals for the 6 main components of the system), the ‘Detailed Description of the Invention’ has a heading ‘The Object of the Invention’ and then the following sub-headings:

i)

‘Technical Field related to the Invention and Conventional Technology’

ii)

‘Technical Problem to be solved by the present invention’

iii)

‘Construction and Operation of the present invention’

iv)

‘Effect of the invention’,

followed by the Claims and the Drawings.

197.

Under the ‘Technical Field’ sub-heading, Oh starts by describing the problems with radio wave shadow areas for a mobile telecommunication system and the conventional solution, which was to install a first optic repeater at the base station and a second in the radio wave shadow area. The two repeaters are connected by an optical line and transmit/receive signals to/from each other through the optical line. The references to the need for amplification of the analogue RF signals, due to the strength of the signals being ‘greatly decreased during transmission through the optical line’, indicate that this prior art system is analogue throughout. The problem with this conventional arrangement is that noise exists in the RF signals which are amplified, and amplifying that noise results in a poor signal to noise ratio.

198.

Accordingly, under the ‘Technical Problem to be solved’ sub-heading, the present invention is said to solve these problems. Thus, the object of the invention is stated to be to provide a digital optic repeater that can maximise the efficiency of signal transmission in such a way that the optic repeater converts the intermediate frequency analogue signals to digital signals and transmits/receives them through the optical line.

199.

Then, under ‘Construction and Operation of the present invention’, Oh explains how to achieve that object with a general description which is slightly more detailed than anything hitherto. The optic repeater at the base station end is called the master unit; it communicates through the optical line with the second repeater, called the slave unit, which in turn exchanges with the mobile terminals the signals it transmits to and receives from the master unit.

200.

The master unit comprises:

i)

first means which converts the RF (analog) signals transmitted from the base station to IF signals and then to digital signals, and transmits those to the slave unit through the optical line; and

ii)

second means which converts digital signals received from the slave unit through the optical line to analogue signals and transmits them to the base station.

201.

The slave unit comprises:

i)

third means which converts the digital (IF) signals transmitted through the optical line to RF analogue signals, amplifies and transmits them to the mobile terminal; and

ii)

fourth means which converts RF analogue signals received from the mobile terminals to IF digital signals and transmits them to the master unit through the optical line.

202.

What I have outlined so far is in the passage at page 2, lines 42-53. In page 2, lines 54-63, various additions are described: each of those four means has means that includes or separates NMS signals in and from the digital signals which are transmitted from a first control unit (in the master unit) to a second control unit (in the slave unit) and vice-versa. Each of the master and slave units comprises a forward and reverse unit.

203.

The passages I have so far described under the heading ‘Construction and Operation of the Present Invention’ are consistory clauses. Thus, page 2 lines 42-53 correspond to claim 1 of Oh and page 2 lines 53-63 are the consistory clauses for claims 2 and 3 of Oh.

204.

At this point, Oh says he turns to his preferred embodiment, which will be explained by reference to the drawings, which are then described in brief terms.

205.

Due to the arguments over Oh, it is relevant to note that by this point in the specification:

i)

there has been no mention of any ‘divider’ or what SOLiD called ‘the multichannel arrangement’.

ii)

the explicit disclosure so far has been of a master unit and a slave unit.

206.

The general structure of the specification from this point on is that, having explained the general ‘installation structure’ by reference to Figure 1, Oh then goes on to describe each of the units in turn, in more detail, by reference to one of the remaining figures, thus:

i)

The Forward Master unit with Figure 2;

ii)

The Forward Slave unit with Figure 3;

iii)

The Reverse Slave unit with Figure 4;

iv)

The Reverse Master unit with Figure 5.

207.

In Figure 1, the optical connection 50 joins the host or master unit 20 to two remote or slave units 30 which communicate with mobile terminals 40:

208.

The text which relates to Fig 1 in the specification is as follows:

‘…RF signals, which are transmitted through the RF cable from the base station 10 to the forward master unit 100 of the master unit 20 comprising the optic repeater 1, are converted to the intermediate frequency signals close to DC; mixed with the NMS signals transmitted from a first control unit 107; and transmitted to the forward slave unit through the optic line. Said forward slave unit 200 separates the NMS signals included in the intermediate frequency signals by a third means; transmits them to a second control unit 204; converts digital signals to the intermediate frequency signals, analog signals; converts the intermediate frequency signals to RF signals; and transmits them to the mobile terminals 40. In other words, The forward master unit 100 of the master unit 20 comprising said optic repeater 1 executes a first means that converts RF signals, analog signals, transmitted from the base station 10 to the intermediate frequency signals; converts them to digital signals; and transmits them to the slave unit 30 through the optic line 50.’

209.

Again, in view of the disputes over Oh, I pause at this point to note the following:

i)

Certain of the main components are neither marked in Figure 1 nor mentioned in this text, but they were previously identified on page 1, namely: 300 reverse slave unit and 400 reverse master unit.

ii)

The related point is that this passage does not describe the reverse path, even though it is clearly illustrated in Figure 1 and involves two slave units 30, as in the forward path.

iii)

In addition to the main components, this passage also makes reference to the control units 107 and 204 that feature in Figs 2 & 5 and Figs 3 & 4 respectively, in which they generate and process, respectively, the NMS control signals.

210.

Once Oh has finished describing Figure 1, he embarks on the description of the multichannel arrangement shown in Figures 2-5. Oh starts with the forward path in which Figure 2 features the processing in the master or host unit and Figure 3 the processing in the slave or remote unit. Figure 2, as helpfully annotated by Professor Seeds, with just one path or channel highlighted, looks like this:

211.

In Figure 2, the downlink analogue RF signal from the base station is first extracted in the duplex filter 101, amplified 102 and passed to the divider 103, which divides the RF signal into 4 parallel independent processing chains, allowing 4 allocated channels to be processed independently. Each channel is processed in the same way. Oh refers to the channels as frequency bands or Frequency Allocations (‘FA’).

212.

Each RF channel is converted to IF by analogue mixers 122, 124, 126 and 128. Different local oscillator frequencies are used so that a common IF frequency – Oh provides an example of 70 MHz which is a standard IF for radio systems – can be used (the local oscillators are shown in the block designated 110). After filtering and amplification the signals are converted to baseband by analogue mixers 152, 154, 156 and 158, using common local oscillator 105, filtered and amplified before being converted to a digital signal by A/D converters 182, 184, 186 and 188. The digital signals are then multiplexed in 104 and the multiplexed signal is then fed to four optical converters 192, 194, 196 and 198.
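
By way of a worked illustration of the frequency arithmetic in this two-stage downconversion (the RF and local oscillator values below are hypothetical; Oh gives only the 70 MHz IF by way of example), each mixing stage translates the signal by the local oscillator frequency, the wanted difference product being selected by the following filter:

\[
f_{\text{IF}} = \lvert f_{\text{RF}} - f_{\text{LO}} \rvert, \qquad \text{e.g. } \lvert 880\,\text{MHz} - 810\,\text{MHz} \rvert = 70\,\text{MHz}.
\]

A second mixing stage against a local oscillator at or near the 70 MHz IF then brings the channel down to the low frequencies close to DC referred to in the next paragraph, where it can be sampled by the A/D converter at a rate sufficient for the channel bandwidth.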

213.

The skilled person would notice certain obvious errors in and in relation to Figure 2. First, that the ADCs are wrongly labelled ‘D/A Converter’; second, each of the outputs is labelled Optic 1, when they would be understood as Optic 1 to N (to be consistent with the start of Figure 3, where the input is labelled Optic N); third, page 3 line 36 of Oh describes the output of mixer unit 150 as being 23 MHz or 1.5 MHz, but the skilled person would appreciate that the former should be 1.23 MHz. Oh refers to these two signals as intermediate frequency signals. Professor Seeds chose to refer to them as baseband signals in order to distinguish them from the higher 70 MHz IF signals, though he acknowledged that not everyone would so describe them. Nothing turns on this.

214.

Figure 3 illustrates the signal flow through a forward slave unit.

215.

The incoming optical signal carrying the serial data stream, which has been sent by the downlink master unit according to the Figure 2 process, is received at a second optic converter 201, labelled Optic N, and is then de-multiplexed into digital signals representing each of the four channels, plus the NMS signal, by de-multiplexer 202. The 4 channel signals are converted to analogue signals by the D/A converters 212, 214, 216 and 218. The 4 analogue outputs of the 4 D/A converters, which are at baseband, are amplified 220 and up-converted to IF by analogue mixers 232-238, using common local oscillator LO 206, filtered by surface acoustic wave filters 240, amplified 250 and then up-converted to the required channel radio frequencies by analogue mixers 262, 264, 266 and 268. Different local oscillator frequencies are used so that a common IF frequency – Oh provides an example of 70 MHz which is a standard IF for radio systems – can be used. The local oscillators are shown in the block designated 270. The resulting RF signals are amplified and combined in an analogue combiner 290 before power amplification and passing to the antenna via duplexing filter 294.

216.

The reverse or uplink path starts in the slave or remote unit, illustrated in annotated Fig 4:

217.

In Figure 4, an RF signal is received from a mobile terminal, is passed through a duplexing filter and is then amplified 302. The amplified signal is then passed through an analogue power divider (splitter) 303 feeding 4 analogue mixers 312, 314, 316 and 318, one for each uplink channel. After filtering 330 and amplification 340, the IF signals are converted to baseband by analogue mixers 352, 354, 356 and 358, filtered, amplified and then converted to digital form in A/D converters 382, 384, 386 and 388.

218.

Again, the ADCs in Figure 4 are wrongly labelled as D/A converters, though correctly described in the specification. In any event, it is obvious to the Skilled Person from the signal flow that they must be A/D converters. The channel signals are multiplexed together with NMS signals from control unit 204 in multiplexer 307 to form a serial data stream which is converted to optical form in converter 308 and then conveyed to the reverse master unit by an optical link.

219.

Figure 5 shows the reverse master unit. Professor Seeds’ annotated Figure 5 looks like this, again with only one channel highlighted:

220.

Figure 5 does not show the incoming optical links, but there are four optical links, each coming from a separate slave unit, which are fed into optical converter units 412, 414, 416 & 418 to produce 4 serial data streams. These are demultiplexed to extract, from each slave unit, the NMS signals (which are passed to the control unit 107) and four channel signals. The first channel signal from each slave reverse unit is shown as being passed to digital combiner 432. The second to fourth channel signals are likewise passed to digital combiners 434, 436 and what should be labelled 438 but is shown as 218 in error. The summed first signals in the same frequency band are then converted into analogue baseband form in DAC 442, before being amplified, upconverted to IF in analogue mixer 462, filtered by surface acoustic wave filters 470, further amplified and then upconverted again to their uplink RF channel frequency by analogue mixer 492. The signal is then amplified again and fed to (analog) combiner 404 with the signals (similarly processed) from the other channels, further amplified in the high power amplifier 405, and fed via duplexing filter 406 to the antenna and base station. As Professor Seeds pointed out, this last connection to the base station is an alternative to the RF cable shown in Figure 1.

221.

In more detail, Oh explains that a plurality of slave units 30 are connected to the demultiplexer unit 420 via the optical converter unit 410. There are four slave units, so there are four optic converters and four demultiplexers in the demultiplexing unit 420. Oh explains that the IF signals transmitted from each slave unit through the optic converter are 52-bit signals. The outputs from the demultiplexers are four 12-bit IF signals which are combined to produce 14-bit signals, which are converted in the DAC to intermediate cycle analogue signals.
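
The 12-bit to 14-bit figure is consistent with the ordinary rule for bit growth when digital samples are summed: adding N words of b bits each can require up to b bits plus the base-2 logarithm of N (rounded up) if overflow is to be avoided. Applying that to the numbers given by Oh:

\[
b_{\text{out}} = b_{\text{in}} + \lceil \log_2 N \rceil = 12 + \lceil \log_2 4 \rceil = 12 + 2 = 14 \text{ bits}.
\]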

222.

Having gone through all the detail of his preferred embodiment, I come to the final section of the specification before the claims, which is entitled ‘Effect of the Invention’. Here, four benefits are identified, which I have marked by the four sections of underlining I have added:

“As can be seen in the description above, the analog RF signals transmitted from the base station to the forward master part of the master unit are converted to intermediate frequency signals and converted to digital signals, which are transmitted to the slave unit along with the NMS, the control signals, via the optical line, said forward slave part of the slave unit converting digital signals to analog signals, converting them to RF signals, and transmitting them to the mobile terminals, thereby enabling the mobile terminals to receive clean signals free of noise. Conversely, the analog RF signals transmitted from the mobile terminals to the reverse slave part of the slave unit are also converted to intermediate frequency signals and transmitted to the master unit along with the NMS, the control signals, via the optical line, said reverse master part of the master unit converting digital signals to analog signals, converting them to RF signals, and transmitting them to the base station, thereby enabling streamlined communication. In addition, the mixer unit mixes the NMS signals, which controls the operation of the slave unit, with the digital signals with no separate device, enabling to transmit/receive to/from the master unit and the slave unit. And, by using digital combiner, there is an effect of the present invention that the construction is simplified, and the characteristics are improved.”

223.

As SOLiD pointed out, the description of the first three benefits does not depend on any multichannel arrangement. However, the benefit identified in the final sentence is likely to be taken by the Skilled Person to refer to a multichannel arrangement, as is signalled by the reference to the use of digital combiner(s).

224.

There are two final points to make about Oh. First, to the uninitiated, it might appear that there was some reason why there were both four FAs or channels and four slave units, but one is not dependent on the other. As Dr Acampora pointed out, the number of slave or remote units is dependent on the resolution of the digital combiners.

225.

Second, the Skilled Person would realise that Oh describes his invention at three levels of generality, notwithstanding the fact that Oh says ‘a preferred embodiment’ will be explained by reference to Figures 1-5. First, in what has been termed the ‘general disclosure’, Oh discloses an arrangement of a single master unit and a single slave unit. Second, in Figure 1, Oh discloses a single master unit and two slave or remote units. Third, by reference to Figures 2-5, Oh discloses the actual preferred embodiment of a multichannel system, with a master unit and four slave units.

What does Oh disclose to the skilled person?

226.

In terms of disclosure, I remind myself it is not limited to what the prior art document explicitly states. Disclosure can also be necessarily implicit. In other words, the disclosure is what is generated in the mind of the Skilled Person who is reading the document with interest with the CGK in mind.

227.

As I mentioned above, Dr Acampora’s main contention was that Oh only disclosed a multi-channel arrangement. However, when addressing the issue of what Oh disclosed in his first report, Dr Acampora managed to avoid mentioning or commenting on the passage in Oh at page 2, lines 42-63 (which I outlined in paragraphs 199-203 above) and, relatedly, the claims.

228.

When it came to CommScope’s Opening Skeleton Argument, it tackled that passage in this paragraph, which I set out because it needs deconstructing:

‘79. Oh’s alternative approach is to employ a digital rather than analogue optical repeater. Oh outlines the general operation of this digital optical repeater system in a thumbnail sketch on p1 lines 42-52. The system outlined there comprises a “master unit” (which we will call a host unit) and “slave unit” (which we will call a remote unit) connected by an optical line. On the forward path, the host unit receives RF signals from the base station, divides them into 4 FAs, down-converts each FA to an intermediate frequency, digitises them, then sends them over the optical line to the remote unit. The remote unit in turn “converts the intermediate frequency signals, the digital signals, transmitted through the optical line to analog signals, converts them to RF signals, amplifies and transmits them to the mobile” (p1 lines 48-50). On the reverse path, the remote unit “converts RF signals, analog signals, received from the mobile terminals to the intermediate frequency signals and again converts them to digital signals” before transmitting them back to the host unit over the optical line (p1 lines 50-52). The host unit then “converts the digital signals transmitted from the [remote unit] through the optical line to RF signals, analog signals, and transmits them to the base station” (lines 47-48).’

229.

This paragraph purports to be a mixture of quotation and summation from lines 42-52 of Oh, but this sentence has been inserted: ‘On the forward path, the host unit receives RF signals from the base station, divides them into 4 FAs, down-converts each FA to an intermediate frequency, digitises them, then sends them over the optical line to the remote unit.’ That, of course, is a summary of what is described later, from page 3 line 9 onwards.

230.

Having pointed this out in their written Closing, Counsel for SOLiD made the rather restrained submission that this risked ‘advancing a misleading summary of the general disclosure of Oh’. I agree. This piece of advocacy was less than impressive but it also confirmed the degree of concern on CommScope’s side as to the disclosure in Oh.

231.

It is clear that the preferred embodiment in Oh is a multichannel arrangement which involves four slave units and four remote antennas. Dr Acampora was of the view that the skilled person would recognise the preferred embodiment as a bespoke design for the South Korean CDMAOne network. CommScope submitted that Professor Seeds agreed that the Skilled Person would recognise the preferred embodiment as being a version of CDMAOne but that is not quite what he said. He agreed that the Skilled Person would recognise that 1.23 MHz (or 1.25 MHz with the guard bands) was the bandwidth of CDMAOne. In relation to the 800 MHz and 1800 MHz spectrum allocations, he said these were used for a whole range of different cellular systems, so those frequencies alone do not point the Skilled Person to CDMAOne, nor away from it. In line with what Professor Seeds said, in my view the Skilled Person would see that the preferred embodiment deals with signals with a particular bandwidth and spectrum allocations, by way of example, which happen to be for CDMAOne in South Korea. None of this really matters.

232.

CommScope’s approach to the disclosure of Oh comprised the following strands:

i)

First, a very definite separation of the ‘general disclosure’ from the disclosure of ‘the preferred embodiment’.

ii)

Second, taking the document in a rather literal way, so that Figure 1 was bundled in with the detail of Figures 2-5, despite what it actually depicts. It was on this basis that CommScope insisted that the starting point for SOLiD had to be the preferred embodiment, in which the features were disclosed of a plurality of slave units and digital summing.

iii)

Third, a warning, founded on the quote from Philips v Asustek, that it was the worst type of hindsight to generalise out from a preferred embodiment, keeping features which are in the claim of the patent and discarding features which are integral to the embodiment but not inconsistent with the patent claim.

233.

It is true that in the passages in Oh down to page 2 line 63 (line 64 is where the specification says it turns to the preferred embodiment), there is no explicit disclosure of more than one slave unit nor, relatedly, any mention of any summing in the host unit. However, in my view, the Skilled Person would not see or have in mind CommScope’s sharp distinction between the ‘general disclosure’ and the preferred embodiment, not least because of Figure 1. The Skilled Person reads a document like Oh with interest, and considers the whole of the document. With some knowledge of how patent documents are written, but in any event from its technical content, the Skilled Person would understand that in what has been designated the ‘general disclosure’ (down to page 2 line 63) Oh provides the essence of his solution to the technical problem Oh is addressing. The Skilled Person would understand this essence to be applicable in a wide range of situations and certainly not limited to the precise details of the preferred embodiment, nor to an arrangement with a single slave unit.

234.

This is made clear by Figure 1 and the description of it. As I indicated above, Figure 1 (in which there are two slave units and two antennas) provides an intermediate level of detail and a bridge between the essence of the general disclosure and the detail of the preferred embodiment. Any Skilled Person looking at Figure 1 would understand the signals from the two slave units would have to be combined and this had to happen in the reverse path in the master or host unit.

235.

The next issue is how this combining would occur. SOLiD submitted that Figure 1 contains a general disclosure of multiple remote units and digital summing. Certainly, there is no explicit disclosure of digital summing in any of the text prior to page 3 line 9. As Professor Seeds said, you could sum in the analog domain, but he pointed out that if you did so, you would not get the benefit. By that he was referring to his previous answer concerning the benefit of Oh’s solution to the technical problem Oh identified: the solution is Oh’s digital optic repeater, which gives a much better signal to noise ratio because the transmission of digital signals involves far less noise.

236.

Professor Seeds’ answer leads to the conclusion that Figure 1 of Oh implicitly discloses to the Skilled Person digital summing in the master or host unit. This is confirmed by (a) the fact that the preferred embodiment plainly discloses digital summing in the master unit; (b) the final sentence in the section headed ‘Effect of the Invention’: ‘And, by using digital combiner, there is an effect of the present invention that the construction is simplified and the characteristics improved’; and (c) the consideration that, in the context of Oh as a whole, the idea of summing the signals in the analog domain would, in my view, be completely counterintuitive to the Skilled Person, who would fully appreciate the point made by Professor Seeds and more general considerations e.g. those mentioned in the last sentence of paragraph 77 above. So I find that Oh necessarily implicitly discloses digital summing in Figure 1.
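
To illustrate what digital summing in the master or host unit amounts to, the following is a minimal sketch in Python (the array lengths, bit widths and names are hypothetical and are not taken from Oh or EP850). Each remote or slave unit delivers a stream of digitised reverse-path samples for a given band, and the host adds them sample by sample in the digital domain, before any conversion back to analogue:

    import numpy as np

    def digitally_sum(remote_streams):
        # Sum time-aligned reverse-path sample streams from several remote units
        # in the digital domain (an illustrative sketch, not Oh's circuit).
        acc = np.zeros(len(remote_streams[0]), dtype=np.int32)   # wide accumulator
        for stream in remote_streams:
            acc += stream.astype(np.int32)                       # sample-by-sample addition
        return acc                                               # summed digital samples for the band

    # Example: two remote units delivering 12-bit reverse-path samples.
    rng = np.random.default_rng(0)
    unit_a = rng.integers(-2048, 2048, size=8, dtype=np.int16)
    unit_b = rng.integers(-2048, 2048, size=8, dtype=np.int16)
    summed = digitally_sum([unit_a, unit_b])                     # two 12-bit inputs fit within 13 bits

The addition takes place on the numerical samples received over the optical links, before any conversion back to the analogue domain.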

237.

This conclusion can be tested by considering two (non-leading) questions posed to the Skilled Person after s/he has read and considered Oh. The first question might be: ‘What happens, in the reverse path, to the signals (which belong in a particular RF band) from each of the two slave units in Figure 1?’ In my view, the answer would be: ‘Well of course they are combined or summed.’ The second question would be: ‘And how are they combined or summed?’ and the answer would be: ‘Digitally, of course’. As I indicated above, it would not make any sense to the Skilled Person to sum the two signals in the analog domain in the context of Oh.

238.

The disclosure made by Oh to the Skilled Person therefore is more nuanced than CommScope would like. Oh discloses a particular multichannel arrangement in his preferred embodiment (which the Skilled Person would understand could be adapted for other multichannel arrangements as well, and which even Dr Acampora accepted could be just two channels) as well as his more general teaching as to the essence of his invention, and the intermediate disclosure in Figure 1.

239.

As Professor Seeds indicated, the Skilled Person would appreciate that the teaching of Oh can be implemented as a single channel system with multiple slave units – this, after all, is what is shown in Figure 1.

240.

The Skilled Person would understand the purpose of the general disclosure: it was to highlight the essential points of his invention. These essential points are the solution to the problem which Oh explicitly addresses. What maximises the efficiency of the signal transmission is the basic teaching of converting the IF signals from analog to digital, sending the digital signals over the optic line and converting them back to analog after that transmission. The point here is that the Skilled Person would understand this teaching as of general application: both to a single channel arrangement and to a multi-channel arrangement.

241.

Although digital summing is a point of significance in the circumstances of this case, Oh does not consider it necessary to mention it in the context of Figure 1 (although it is covered in the more detailed description of the multichannel arrangement because individual functional components are described). The reason for that is that the Skilled Person would, in my view, automatically understand that the signals were to be digitally summed.

242.

Accordingly, I find that there is a clear and unambiguous disclosure in Oh of a single channel digital DAS involving more than one slave unit in which the signals in the reverse channel are digitally summed.

Does Oh anticipate claim 1 of EP850?

243.

Dr Acampora’s view was that there were fundamental differences between EP850 and Oh. He took the view that EP850 discloses and claims a digital DAS that processes an input RF signal in a single pipeline and hence is entirely agnostic to the content being carried in that RF signal. His view of Oh was that it disclosed a system designed specifically around the South Korean wireless network standard and is implemented so as to split an input RF signal into pre-defined constituent FAs. The equipment dedicated to each FA discards the remainder of the input RF signal and processes its FA in its own separate processing pipeline.

244.

The findings I have already made mean that Dr Acampora was wrong on these points concerning disclosure and claim scope. Although claim 1 of EP850 covers a single channel arrangement, it is not limited to such an arrangement. The disclosure of Oh is not limited to the preferred embodiment. To the Skilled Person, Oh discloses a multi-channel arrangement, a single channel arrangement and one involving two slave units. Finally, as Professor Seeds explained, any DAS is agnostic as to the RF signal it ingests.

245.

In light of the above, I can state the consequences succinctly. As I have construed claim 1 of EP850, and as I have assessed the disclosure of Oh, it anticipates claim 1. Even on CommScope’s construction of claim 1 (in effect that it claimed a single-channel system), on my findings as to the disclosure of Oh, claim 1 is anticipated.

Was claim 1 of EP850 obvious over Oh?

246.

This question only arises if claim 1 of EP850 was not anticipated by Oh, for some reason.

247.

One possible reason is that the disclosure of digital summing is not clear and unambiguous. However, for the reasons stated above, it was obvious to digitally sum the signals in the reverse path.

248.

A second possible reason is that Oh only disclosed a multi-channel arrangement, as Dr Acampora contended, and that I am also wrong as to the construction of claim 1, so that it is limited to a single-channel system. Let me make those assumptions. I have set out the differences which Dr Acampora identified between Oh and EP850. In terms of the steps Dr Acampora identified as required to get from his understanding of Oh into claim 1 of EP850, he first explained that the divider and the multiple parallel processing pipelines were an integral aspect of the Oh system. He stated there was no hint in Oh that a single pipeline was desirable or even possible. He said there was no other motivation that would have suggested to the Skilled Person that the multi-channel arrangement in Oh should be dispensed with and that Oh should be modified to implement a single processing pipeline.

249.

Following that, Dr Acampora managed to identify no fewer than (I think) 20 changes required in the forward path direction and 21 changes in the reverse path direction. These changes amounted to discarding individual components. He also stated that the Skilled Person would then need to re-specify the components used in the single processing pipeline in both the host unit and the remote units to accommodate whatever bandwidth signal was intended to be processed by the modified Oh system.

250.

All of this reasoning from Dr Acampora was either wrong or very significantly overstated.

251.

First, in terms of motivation, Professor Seeds gave examples of applications where the Skilled Person would be called upon to implement a single channel system e.g. emergency service radio systems, where a lot less bandwidth is required than for cellular communications and yet the solution of a digital DAS connecting to multiple remote units would still be attractive. An emergency service radio system on, for example, the Bakerloo line was one real-life example given.

252.

Second, in terms of the steps required, Dr Acampora grossly overstated the task. As Professor Seeds indicated, the Skilled Person would be able readily to create a single channel system from Oh’s preferred embodiment simply by dispensing with the components required for the other 3 channels. Dividers and combiners would be required if, as was very likely, the Skilled Person required more than one remote unit for his application. All the changes required would have been entirely obvious to do, on those assumptions. Finally, in terms of respecifying the components, that was well within the skills of the Skilled Person. All these steps were trivial.

253.

Accordingly, on any of the assumptions I identified above, claim 1 of EP850 was obvious over Oh, in my judgment.

Was claim 7 of EP850 obvious over Oh?

254.

Professor Seeds was of the view that claim 7 was obvious over Oh, partly due to the fact that he considered expansion units to be CGK (as I have found) and partly (as I understood matters) because many network architectures involving expansion units were obvious, including the particular arrangement in claim 7.

255.

To my understanding, Dr Acampora identified two reasons why he contended claim 7 was not obvious over Oh. The first reason was that the Skilled Person was not aware of any expansion units at all, a point I have rejected. The second reason is somewhat more involved.

256.

In his first report, Dr Acampora stated that the Skilled Person would not understand the Oh system to be limited to only four remote units and would understand that the resolutions of the digital combiners in Oh are set by reference to the number of remote units. He went on to state that when designing an Oh system, the Skilled Person would understand that the output resolution of the digital combiners would be specified relative to the total number of remote units attached and that this would also impact the specification of components in the host unit downstream from the digital combiners, specifically the DACs. Professor Seeds agreed with this, but disagreed with the final step in Dr Acampora’s reasoning, which was that the DACs would need to support as an input the bit resolution of the summed digital signal output by a digital combiner. Although that was what was done in Oh, Professor Seeds said that the skilled person would know that this was just one approach to dealing with overflow and would appreciate that as the number of inputs increased this would cease to be an effective approach, and instead the input to the DAC would need to be limited or scaled, in line with standard CGK approaches to dealing with overflow. I accept Professor Seeds’ evidence on this point.
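
The contrast Professor Seeds drew can be sketched as follows (a minimal illustration with hypothetical bit widths and function names; only the width-growth figures reflect what Oh itself describes):

    import numpy as np

    def sum_with_width_growth(streams, in_bits=12):
        # Oh's approach: let the output word grow so that overflow cannot occur,
        # and require the DAC to accept the wider word.
        out_bits = in_bits + int(np.ceil(np.log2(len(streams))))   # e.g. 12 + 2 = 14
        summed = np.sum([s.astype(np.int32) for s in streams], axis=0)
        return summed, out_bits

    def sum_with_limiting(streams, dac_bits=12):
        # CGK alternative: keep the DAC word size fixed and limit (clip) the sum.
        summed = np.sum([s.astype(np.int32) for s in streams], axis=0)
        limit = 2 ** (dac_bits - 1) - 1
        return np.clip(summed, -limit - 1, limit).astype(np.int16)

    def sum_with_scaling(streams, dac_bits=12):
        # CGK alternative: scale the sum back down into the DAC's input range.
        summed = np.sum([s.astype(np.int32) for s in streams], axis=0)
        shift = int(np.ceil(np.log2(len(streams))))
        return (summed >> shift).astype(np.int16)                 # shift back into range

As the number of remote unit inputs grows, the first approach requires ever wider DACs, which is why, on Professor Seeds’ evidence, the Skilled Person would instead limit or scale the summed signal in line with the standard CGK approaches to overflow.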

257.

However, reverting to Dr Acampora’s views: because of what he explained (as I have set out above), he said it would not have been obvious to implement an expansion unit starting from Oh. As I understand it, however, that argument was founded on his view as to which ways of dealing with overflow were CGK, and I rejected that view above.

258.

Furthermore, Professor Seeds stated that expansion units and the advantages they provide to DAS systems were well known to the Skilled Person reading Oh at the Priority Date, so that, where the Oh system was to be implemented in an environment in which the use of expansion units would have been advantageous (i.e. to reduce the amount of cabling required), the Skilled Person would have used them and no inventive effort would have been required. Professor Seeds explained his reasons in detail.

259.

In his third report, Dr Acampora was asked to assume that the LGCell system and its expansion hub was CGK and then reconsider his views on claim 7. On this basis, Dr Acampora considered there was a fundamental difference between the ‘double star’ topology exemplified by the LGCell system and the topology of claim 7 of EP850. He illustrated this in the following diagram:

260.

Dr Acampora reasoned that even if the Skilled Person had the idea of modifying Oh to implement a double star topology, and assuming the LGCell expansion hub was CGK, the Skilled Person would still arrive at a different topology to the Claim 7 Network Topology i.e. one in which all the remote units were connected to the master unit via an expansion unit. Accordingly, he was of the view that the Claim 7 Network Topology was not obvious.

261.

He also stated that a key difficulty in implementing the Claim 7 Network Topology starting from Oh concerned summation overflow. Although Dr Acampora gave an example in which he described the changes required as ‘subtle’ and ones which would not be apparent to the Skilled Person, this, in my view, was more nonsense. It depended on the Skilled Person not being able to cope with digital summation of signals and being unaware of the various CGK methods of dealing with overflow.

262.

Dr Acampora was correct as to the topology required by claim 7. However, as Professor Seeds said, the Skilled Person’s implementation of Oh’s teaching would depend on the nature of the installation sites targeted for the DAS product, and the Skilled Person would adopt a network architecture appropriate for the types of area to be covered by the DAS. For example, with a master unit positioned in the basement of a TSB, it would be appropriate to have slave or remote units positioned throughout the basement which were directly coupled to the master unit, whereas for the higher floors in the building, it would be appropriate to run a connection to one or more expansion units on each floor which were then connected to slave or remote units spread throughout that floor. The point is that potentially a large number of arrangements were obvious using expansion units, of which claim 7 claims just one class. I therefore find that claim 7 was obvious over Oh. Once again, Dr Acampora grossly overstated the difficulties.

263.

I am not sorry to invalidate EP850 in view of Oh as prior art. Although each document drew attention to different bits of detail, it is worth pointing out that the disclosure in Oh was, in many respects, far more insightful and sophisticated than the disclosure in EP850. CommScope sought to exploit that to their advantage, contending that Oh came with all the baggage disclosed in the preferred embodiment but, for the reasons explained above, CommScope’s contention was only possible if one ignored the more general teaching in Oh and did not read the document as the Skilled Person would have done. Furthermore, as EP850 itself acknowledged, a digital DAS was already known (though I note it was not acknowledged to be CGK), but only, according to EP850, in a point to point architecture and not in a point to multipoint system. The problem solved by EP850 was very limited in its scope.

INFRINGEMENT

264.

On the basis of the construction(s) contended for by SOLiD or those most recently asserted by CommScope, no issues on infringement arose. I need not say anything further.

CONCLUSIONS ON EP850

265.

For all the reasons explained in this judgment, I find that if EP850 had been valid, SOLiD’s Genesis system would have infringed. Since I find that EP850 is invalid and must be revoked, SOLiD has not infringed.

THE APPLICATION TO AMEND EP626

266.

This application to amend comes before the Court in unusual circumstances. EP626 relates to a digital DAS with daisy chain topology. By the Amendment Application, issued on 1 September 2021, CommScope seek to add a further feature to independent claims 1 and 9, namely ‘sectorised antennas’. A cell is divided into sectors and each sector has its own antenna. In the usual way, the amendments are proposed ‘to further distinguish over the prior art’ which SOLiD had pleaded against EP626 by way of counterclaim. The amendments are sought unconditionally.

267.

Shortly after the Amendment Application was issued, the parties agreed terms on which to settle the EP626 part of the action which resulted in two Orders made by HHJ Hacon on 21 September 2021. The relevant effect of those Orders was that each side undertook to carry out a staged discontinuance of the relevant claims, CommScope undertook not to sue on EP626 in the UK in respect of any SOLiD product described in the Amended PPD, SOLiD undertook not to oppose CommScope’s unconditional application to amend, and the Amendment Application for both EP850 and EP626 was ordered to be heard at the trial of EP850.

268.

It is that last provision which gives rise to certain difficulties, but I must first complete the chronology of events. The Comptroller sent a fairly lengthy report on the proposed amendments by letter dated 26 October 2021. CommScope’s initial reaction was to suggest to SOLiD that it would withdraw its application to amend. The parties were unable to agree whether the EP626 amendment application was still live or not, so the issue came before me at the PTR on 23 November 2021. Having heard brief argument, I directed that the Amendment Application was to be heard at the EP850 trial, in accordance with the agreed direction contained in the Order of HHJ Hacon.

269.

Following that direction, CommScope sought to amend their Statement of Reasons to address certain points of clarity and served Dr Acampora’s fourth report to address certain allegations of added matter.

270.

In these circumstances, the issues I have to determine were identified by CommScope as follows:

i)

Whether the Court has jurisdiction to grant the amendment application?

ii)

Whether the proposed amendments are allowable?

iii)

If they are not, what should happen to EP626?

271.

Notwithstanding SOLiD’s undertaking not to challenge the amendment application, I did hear brief submissions from Mr Cronan for SOLiD, principally on the first and third points. He urged me to find jurisdiction, disallow the amendments and to declare EP626 invalid as a result.

Jurisdiction

272.

Section 75 of the Patents Act 1977 is the provision which gives the Court jurisdiction to allow amendment of a patent. Section 75(1) provides:

‘(1) In any proceedings before the court or the comptroller in which the validity of a patent may be put in issue the court or, as the case may be, the comptroller may, subject to section 76 below, allow the proprietor of the patent to amend the specification of the patent in such manner, and subject to such terms as to advertising the proposed amendment and as to costs, expenses or otherwise, as the court or comptroller thinks fit.’

273.

Section 74(1) contains a list of those proceedings and section 74(2) makes clear that this is a closed list:

‘(1) Subject to the following provisions of this section, the validity of a patent may be put in issue—

(a)

by way of defence, in proceedings for infringement of the patent under section 61 above or proceedings under section 69 above for infringement of rights conferred by the publication of an application;

(b)

in proceedings in respect of an actionable threat under section 70A above;

(c)

in proceedings in which a declaration in relation to the patent is sought under section 71 above;

(d)

in proceedings before the court or the comptroller under section 72 above for the revocation of the patent;

(e)

in proceedings under section 58 above.

(2)

The validity of a patent may not be put in issue in any other proceedings and, in particular, no proceedings may be instituted (whether under this Act or otherwise) seeking only a declaration as to the validity or invalidity of a patent.’

274.

CommScope point out that there were proceedings within s74(1)(a) and (d) but those proceedings have been discontinued. CommScope also submit (correctly) that there is no authority directly on point, but they drew my attention to the following authorities by way of possible guidance.

275.

First, Lever Bros & Unilever’s Patent (1955) 72 RPC 198 (CA) which was a case under s30(1) of the 1949 Act, under which the Court had power to permit amendment in “any action for infringement of a patent or any proceedings before the court for the revocation of a patent”. The Court of Appeal held that the jurisdiction to permit amendment was lost as soon as proceedings for revocation were compromised (there was no infringement claim).

276.

Second, Lars Eric Norling v Eez-Away [1997] RPC 60 was a case under the old (pre-2005) version of s75(1), under which the Court had power to permit amendment in “any proceedings … in which the validity of a patent is put in issue”. In that case, the defence and counterclaim based on validity were withdrawn but the claim for infringement was continuing to trial. Jacob J held that s75(1) had been activated when validity was put in issue, and was not lost when validity ceased to be in issue (see p164 lines 37-41).

277.

Jacob J distinguished Lever Bros on the basis of differences in wording between s30(1) of the 1949 Act and s75(1) of the 1977 Act. However, as CommScope pointed out, s75(1) has since been amended and they submit it is now much closer to s30(1) of the 1949 Act.

278.

Jacob J was able to find that the Court retained power to permit amendment because there remained extant, continuing proceedings (i.e. the infringement proceedings) in relation to the relevant patent in which validity had been put in issue, by way of defence. So the Court had power to permit amendment in those proceedings.

279.

Having considered this issue carefully, I have come to the conclusion that the Court does not have jurisdiction to decide this Amendment Application, for the following reasons.

280.

First, when the application to amend was issued there were proceedings in which the validity of EP626 was in issue, so the Court had jurisdiction at that point. Subsequently, CommScope served a notice of discontinuance of its infringement claim on 22 September 2021 and SOLiD served a notice of discontinuance of its counterclaim for invalidity two days later. There was no qualification to either notice of discontinuance: each concerned the whole of the relevant claim or counterclaim relating to EP626. Thus, by the time of the PTR and the trial, there were no extant proceedings in which it can be said that the validity of EP626 was or may be put in issue.

281.

Second, the fact that HHJ Hacon on 21 September 2021 was invited to and did make an Order by consent directing that the application to amend EP626 should be heard at the trial of EP850 does not, it seems to me, carve out an exception to either notice of discontinuance, which as I have said were unqualified. Those notices put an end to the application to amend. As the Court of Appeal in Lever Brothers indicated, the decision in that case might have been different if the settlement had been expressed to be conditional upon some specified amendment to the specification being approved. So too in this case. However, the mere fact of the direction was not sufficient to create a suitable condition, in my view, especially since it emerged that the parties were not in agreement as to its effect, and it is not possible retrospectively to amend one of the notices of discontinuance.

282.

Third, in these circumstances, although it is tempting to decide the amendment application, having heard full argument on it, I do not see that there is any wriggle room to give the Court jurisdiction.

283.

Fourth, there is no reason to strain to find jurisdiction. CommScope will be able to bring their application to amend before the Comptroller and would be well advised to do so quickly. SOLiD are not at risk due to the provisions in the Consent Order.

284.

Finally, there remains a possible argument which CommScope mentioned. One possible solution might be to regard the relevant date as the date on which the application was made: on that date the Court did have jurisdiction under s75(1). As CommScope also pointed out, the problem with that solution is that s75(1) is directed to the time when “the court … may … allow the proprietor of the patent to amend”. It would appear that it is necessary to establish jurisdiction both when an application to amend is first made and when it is determined by the Court. In the ordinary case, there is no problem, but in this case I have found the jurisdiction was removed by the notices of discontinuance.

285.

In view of my conclusion on jurisdiction I will not lengthen this judgment further with a discussion of the various points on whether the proposed amendments should be allowed. Furthermore, if, as I envisage, a Hearing Officer acting for the Comptroller will have to decide whether these amendments should be allowed, I think he or she should take their decision uninfluenced by anything said by me.

286.

The third point is also academic, but I think it will assist if I say something about it. SOLiD argued that because this amendment application was unconditional, if it failed, the necessary consequence had to be that EP626 should be revoked. However, the situation is not quite so simple.

287.

CPR 63.10(1) specifies that an application under s.75 of the Act must be made by application notice. Then CPR 63.10(2) specifies that:

‘(2) The application notice must–

(a)

give particulars of–

(i)

the proposed amendment sought; and

(ii)

the grounds upon which the amendment is sought;

(b)

state whether the applicant will contend that the claims prior to the amendment are valid; and

(c)

be served by the applicant on all parties and the Comptroller within 7 days of it being filed.’

288.

In my experience, although litigants dutifully specify whether their amendments are sought on a conditional basis or an unconditional basis, the requirement of CPR 63.10 (2)(b) is often not met. It is often assumed, as SOLiD assumed in this case, that if amendments are sought unconditionally, that implies an acceptance that the patentee is not contending the claims in their unamended form are valid, although sometimes amendments may be sought unconditionally in an attempt to cut down on the issues.

289.

In this case, SOLiD’s assumption was confirmed by statements, later withdrawn, made in solicitors’ correspondence as to the consequences of the amendments not being allowed.

290.

However, Mr Abrahams QC reminded me of what occurred in Ferag v Muller Martini [2007] EWCA Civ 15 where, even though the application to amend was made on an unconditional basis, the patentee appellant was permitted to withdraw its application to amend having succeeded on appeal in overturning the first instance findings that the patent was invalid, not infringed and not saved by the proposed amendments. At [113] Jacob LJ said this:

‘It is true the application was made unconditionally, but there was no concession that the unamended claim was invalid, so it is difficult to see what ‘unconditional’ meant.’

291.

In future, if an application to amend is received which does not comply with CPR 63.10(2)(b), the recipient would be well advised to insist on compliance so the position is clear. In this case, if I had decided not to allow the amendments, I do not think it would have been right to revoke EP626 in the absence of a clear and considered concession to that effect from CommScope.
