
Koninklijke Philips NV v Asustek Computer Incorporation & Ors

[2018] EWHC 1224 (Pat)

Neutral Citation Number: [2018] EWHC 1224 (Pat)

Case No: HP-2015-000063

IN THE HIGH COURT OF JUSTICE

BUSINESS AND PROPERTY COURTS

INTELLECTUAL PROPERTY LIST (CHANCERY DIVISION)

PATENTS COURT

Rolls Building, Fetter Lane, London, EC4A 1NL

Date: 23 May 2018

Before :

MR JUSTICE ARNOLD

Between :

KONINKLIJKE PHILIPS NV

Claimant

- and -

(1) ASUSTEK COMPUTER INCORPORATION

(2) ASUSTEK (UK) LIMITED

(3) ASUS TECHNOLOGY PTE. LTD

(4) HTC CORPORATION

(5) HTC EUROPE CO. LTD

Defendants

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Mark Vanhegan QC and Adam Gamsa (instructed by Bristows LLP) for the Claimant

Thomas Hinchliffe QC and Joe Delaney (instructed by Taylor Wessing LLP) for the ASUS Defendants and (instructed by Hogan Lovells International LLP) for the HTC Defendants

Hearing dates: 27, 30 April, 1-3, 9 May 2018

- - - - - - - - - - - - - - - - - - - - -

Approved Judgment

I direct that pursuant to CPR PD 39A para 6.1 no official shorthand note shall be taken of this Judgment and that copies of this version as handed down may be treated as authentic.

.............................

MR JUSTICE ARNOLD

MR JUSTICE ARNOLD :

Contents

Topic | Paragraphs

Introduction | 1-18
The witnesses | 8-18
Mr Edwards | 8-11
Mr Gould | 12-18
Technical background | 19-127
Mobile telecommunication standards | 20-68
Standard setting | 27-32
Elements of a mobile telecommunications system | 33-39
OSI seven-layer model | 40-51
Duplexing schemes | 52-54
Multiple access schemes | 55-56
Functions of the radio transmission chain | 57-62
Repetition coding and channel coding | 63-66
Error control strategies | 67-78
Error concealment | 68
FEC | 69-70
ARQ | 71-75
Comparison of FEC and ARQ | 76-77
HARQ | 78
Noise and interference | 79-83
Signal transmission and detection | 84-87
Modelling the effect of noise | 87
Probability of error | 88
Power control | 91-103
Near-far problem | 95-98
Slow fading | 99
Multipath (fast) fading | 100-103
Power control techniques | 104-109
UMTS Release 4 | 110-119
Development of HSDPA | 120-126
The problem to which the Patent is addressed | 127
The Patent | 128-145
Technical field | 128
Background | 129-133
Disclosure of the invention | 134-136
Modes for carrying out the invention | 137-145
The claims | 146
Construction | 147
The skilled person | 148
Common general knowledge | 152-174
cdmaOne | 153
Differential gain on channels and field | 154
Differential powers on binary antipodal signalling | 155
Power control of uplink channels in UMTS | 156-157
TS 25.308 and TR 25.855 | 158-162
Soft handover in HSDPA | 163-172
Agreed key points | 173
System design | 174
Motorola 021 | 175-193
Obviousness over Motorola 021 | 194-202
Difference between Motorola 021 and Claim 10 | 194
Primary evidence | 195-198
Secondary evidence | 199-201
Conclusion | 202
Shad | 203-225
Obviousness over Shad | 226-267
Difference between Shad and claim 10 | 226
Primary evidence | 227-265
The Dutch decision | 266
Conclusion | 267
Conclusion | 268

Introduction

1.

These proceedings concern three patents owned by the Claimant (“Philips”): European Patent (UK) No. 1 440 525, European Patent (UK) No. 1 685 659 and European Patent (UK) No. 1 623 511. Philips has declared that these patents are essential to the European Telecommunications Standards Institute (ETSI) Universal Mobile Telecommunications System (UMTS) standard (“the Standard”), in particular the sections of the Standard that relate to the operation of the system known as High Speed Packet Access (HSPA).

2.

The Defendants fall into two groups: the First, Second and Third Defendants (“the ASUS Defendants”) and the Fourth and Fifth Defendants (“the HTC Defendants”). Both the ASUS Defendants and the HTC Defendants sell HSPA-compatible mobile phones. Philips alleges infringement of the patents by reason of their essentiality to the relevant versions of the Standard.

3.

By a consent order dated 12 April 2017 it was agreed that the technical issues relating to the patents would be tried in two separate trials: Trial A concerning the validity and essentiality of EP (UK) 1 440 525 (“the Patent”) and Trial B concerning the validity and essentiality of the other two patents. Further technical issues that have subsequently emerged will if necessary be tried in a third trial, Trial C. Issues relating to Philips’ undertaking to ETSI to grant licenses on FRAND terms will if necessary be addressed in a fourth trial, Trial D.

4.

This judgment concerns the validity of the Patent following Trial A. The Patent is entitled “Radio Communications System”. There is no challenge to the earliest claimed priority date of 19 October 2001 (“the Priority Date”). At trial there was no issue as to essentiality or infringement. The Defendants advanced a common case contending that the Patent was invalid for obviousness over two items of prior art:

i)

Document TSGR1/R2-12A010021 entitled “Control Channel Structure for High Speed DSCH (HS-DSCH)”, a contribution submitted to the 3GPP TSG-RAN Working Group 1 and 2 ad hoc meeting in Sophia Antipolis, France on 5-6 April 2001 by Motorola (“Motorola 021”);

ii)

Document 3GPP2/TSG-C C50-20010709-024 entitled “Optimal Antipodal Signaling”, a contribution submitted to the 3GPP2 TSG-C meeting in Montreal, Canada on 9-13 July 2001 by Faisal Shad and Brian Classon of Motorola (“Shad”).

5.

Although Philips had applied conditionally to amend the claims of the Patent, it was common ground at trial that, given the way in which the Defendants put their case on obviousness, it was unnecessary to consider the proposed amendments. It was also common ground that it was only necessary to consider granted claim 10.

6.

There was no dispute between the parties as to the applicable legal principles, which are well established. Accordingly, there is no need to set them out in this judgment.

7.

Notwithstanding the limited number, and narrowness, of the issues, the parties filed a substantial volume of evidence: each expert witness served four reports running to a total of some 325 pages (although that includes some pages dealing with issues which fell by the wayside) and four files of exhibits; there were two additional files of cross-examination documents; and each expert was cross-examined for a day and a half. The parties also filed a substantial volume of submissions: Philips’ written closing submissions ran to 206 paragraphs and 88 pages, while the Defendants’ written closing submissions ran to no less than 369 paragraphs and 129 pages. I have taken all this material into account, but I do not consider it necessary to refer to all of it in this judgment. Rather, I propose to concentrate upon what I consider to be the more important points and to ignore some of the more tangential ones.

The witnesses

Mr Edwards

8.

Philips’ expert was Keith Edwards. He received a degree in Electronic Engineering from the University of York in 1983. After graduation, he worked for Dowty before joining Standard Telephone and Cables (STC). At STC, he worked on military equipment. STC was acquired by Nortel in 1991. Following the acquisition, Mr Edwards worked on commercial telecommunications projects, including extended range base stations for GSM and then on development of a Fixed Wireless Access system, which is similar to a mobile telephone cellular network, but on a more limited geographic scale and without handover capability. In 1996 Mr Edwards became a manager in Nortel’s Advanced Technology division responsible for a team of six to eight people working on advanced wireless feasibility studies, including new voice coding protocols and advanced TDMA techniques.

9.

Mr Edwards began work on UMTS towards the end of 1998, in particular on the wideband TDD mode. He followed the development of both the TDD and FDD standards closely and participated in four RAN 1 (physical layer) and RAN 4 (radio resource management) standards meetings between 1999 and 2000. Mr Edwards then worked on the early stages of LTE (4G) technology and attended standards meetings in 2006 and 2007.

10.

Since 2009, Mr Edwards has worked as a consultant in the telecoms field, which has included providing advice in relation to intellectual property aspects. He also lectures on telecommunications technology for the Open University. He is the named inventor on over 20 granted US patents with further applications pending.

11.

Counsel for Philips pointed out that a number of questions were put to Mr Edwards in cross-examination on a false basis, but Mr Edwards was not tripped up by those questions and so it does not matter. Counsel for the Defendants submitted that Mr Edwards was very well versed in Philips’ case and determined to defend it. I do not accept this submission. Mr Edwards appeared to me to be doing his best to assist the court, and did make certain concessions. The cogency of the opinions he expressed is a separate question to his performance as a witness. I will consider those in context.

Mr Gould

12.

The Defendants’ expert was Peter Gould. Mr Gould obtained a BTEC National Certificate (ONC) and a BTEC Higher National Diploma (HND), both in Engineering, during an apprenticeship with the Ministry of Defence Sea Systems Controllerate from 1984 to 1988. In 1991, he received a First Class degree in Electronic Engineering from the University of Southampton, sponsored by the Ministry of Defence. On graduation, he joined MAC Ltd, which is a consulting and product development company specialising in mobile communications. He has remained there ever since. Mr Gould has been involved in GSM, cdmaOne and UMTS consulting projects, as well as projects in TETRA (Terrestrial Trunked Radio), iDEN (integrated Digital Enhanced Network) and WiMAX (WorldWide Interoperability for Microwave Access). In 1993 he led a project for Qualcomm comparing GSM and cdmaOne.

13.

In 1995 Mr Gould became involved in presenting MAC’s training courses (including in respect of GSM, cdmaOne and UMTS). He co-authored a book entitled GSM, cdmaOne and 3G Systems (Wiley, 2001) and contributed a chapter to Understanding UMTS Radio Network Modelling, Planning and Automated Optimisation: Theory and Practice (Wiley, 2006). He has also presented a number of UMTS papers at international conferences.

14.

More recently, Mr Gould has worked on the interference impact of LTE technology and wireless sensors. Mr Gould is a member of techUK’s SmarterUK Transport steering board and the SmarterUK Advisory Council, and a fellow of the Institution of Engineering and Technology (IET). He has also acted as an expert evaluator for the European Commission in the field of Information Society Technologies.

15.

Counsel for Philips pointed out that, unlike Mr Edwards, Mr Gould had not attended any 3GPP meetings and had had no experience in developing UMTS Release 99 or Release 4 products as at the Priority Date. Furthermore, again unlike Mr Edwards, Mr Gould had not read all the relevant 3GPP contributions in 2001 to remind himself about the state of the art at the Priority Date before preparing his first report, although he did so before preparing his second report. I accept that the first of these points means that Mr Edwards was slightly better qualified as an expert than Mr Gould, but not substantially. The main relevance of the second point is that it lends support to counsel for Philips’ submission discussed in the following paragraph.

16.

Counsel for Philips submitted that Mr Gould had fallen into the trap of relying on hindsight. As Mr Gould accepted, he copied one sentence of his discussion of the common general knowledge in his first report from the Patent without acknowledgement. It is therefore clear that Mr Gould had a copy of the Patent in front of him when writing this section. Whether for that reason or not, he also said that it was common general knowledge that there were three variables that could be altered to change the error probabilities in a binary antipodal signalling scheme, namely the voltage of the decision threshold, the voltage of the first signal and the voltage of the second signal, and hence increasing the power of either signal would decrease the error probability for that signal. As he accepted, however, none of the books he relied on disclosed binary antipodal signalling with different powers for each signal and the only pre-Priority Date document disclosing this in evidence is Shad. Counsel for Philips submitted, and I agree, that this is likely to have affected Mr Gould’s opinion as to obviousness over Motorola 021.

17.

Counsel for Philips also submitted that parts of Mr Gould’s evidence were inconsistent, incoherent and illogical. This is a submission about the cogency of his opinions, which is again a separate question.

18.

Finally, counsel for Philips submitted that at times Mr Gould had lost sight of his role and had become an advocate for the Defendants’ case. I do not accept this. Mr Gould appeared to me to be doing his best to assist the court, and did make certain concessions. As will appear, I found his evidence with respect to Motorola 021 unconvincing, but I think that was because he was influenced by hindsight rather than because he was arguing the Defendants’ case.

Technical background

19. The following account of the technical background is mainly based on the primer agreed by the parties, which I have supplemented from the expert evidence.

Mobile telecommunication standards

20.

There are a number of standards for mobile telecommunication systems in operation in different countries. There have been a series of generations of standards, including the second generation (2G), third generation (3G) and fourth generation (4G). Each standard is periodically revised to introduce improvements and new features. New versions are typically called “Releases”.

21.

Global System for Mobile Communications (GSM) is a 2G system developed by ETSI based on time division multiple access (TDMA) and frequency division multiple access (FDMA) technology. The first version of the GSM standard was released in the late 1980s. By the Priority Date GSM had been commercially launched in many countries around the world, including the UK and throughout Europe.

22.

UMTS is an example of a 3G system. Work on developing the UMTS standard was begun by ETSI in the mid-1990s and then continued by the 3rd Generation Partnership Project (3GPP).

23.

The first full UMTS release, Release 99, was, despite the name, released in March 2000. By the Priority Date Release 4 had been released and work was underway on Release 5, but Release 5 had not been finalised and product development had not started. The first commercial launch of UMTS (Release 99) was in Japan on 1 October 2001. The UMTS standard had not been put into use commercially in the UK, or elsewhere in Europe, prior to the Priority Date.

24.

IS-95 (later known as cdmaOne) is a 2G system developed primarily by Qualcomm based on code division multiple access (CDMA) technology. The first version of the IS-95 standard was released in the mid-1990s. By the Priority Date IS-95 had been commercially launched in many countries around the world, including in South Korea and the US, but not in the UK or elsewhere in Europe.

25.

cdma2000 resulted from work on the evolution of IS-95 towards the third generation and was standardised by the 3rd Generation Partnership Project 2 (3GPP2). The standard had been released prior to the Priority Date and had also been put into use commercially by this time in South Korea where it was launched in 2000. But it had not been put into use elsewhere, including the UK and Europe, by the Priority Date.

26.

Prior to the Priority Date, 3GPP and 3GPP2 had been working independently on the standardisation of high speed data mobile systems.

Standard setting

27.

The purpose of producing standards is to ensure that different items of equipment from different vendors will operate together. For example, a Mobile Station (MS) produced by one manufacturer must be able to work correctly with a Base Station (BS) and other network equipment from other manufacturers. From the consumer’s and the network operator’s perspectives, therefore, the whole system should work together seamlessly.

28.

3GPP was formed in 1998 to work on developing the UMTS standard. 3GPP is an international standardisation project which includes standard-setting organisations from around the world, for example the American National Standards Institute (ANSI) and the Chinese Wireless Telecommunication Standard (CWTS) as well as ETSI.

29.

In the period 2001-2004 3GPP was divided into a number of technical specification groups (TSGs) which were responsible for different aspects of the system:

i) Radio Access Network (TSG-RAN);

ii) Core Network (TSG-CN);

iii) Service and System Aspects (TSG-SA);

iv) Terminals (TSG-T).

30.

For present purposes, the Radio Access Network technical specification group (TSG RAN) is the most relevant group in 3GPP. TSG RAN in the period 2001-2004 was divided into different working groups, covering various matters related to the operation of base station equipment and mobiles. For example Working Group 1 (RAN WG1) was responsible for the specification of the physical characteristics of the radio interface. RAN WG2 was responsible for the Radio Interface architecture and protocols (MAC, RLC), the specification of the Radio Resource Control protocol, the strategies of Radio Resource Management and the services provided by the physical layer to the upper layers (see further below).

31.

Each working group held meetings bringing together delegates from many different stakeholders (predominantly large mobile handset, base station, or semiconductor manufacturers but also network operators) to propose and discuss contributions to the standard with a view to reaching agreement on what should be incorporated in the version of the standard being worked on.

32.

At technical meetings and plenary meetings, the stakeholders would present temporary documents (T-docs) which might then form parts of Technical Reports (TRs) or be drawn together into Technical Specification (TS) documents.

Elements of a mobile telecommunications system

33.

Figure 1 below shows the main components of a typical mobile telecommunications network in the 1990s and 2000s at a general level.

34.

Mobility is achieved within the network by facilitating “handover” of an MS between different cells (in this context a cell is a geographic area corresponding to the radio coverage of a BS transceiver) located within the RAN as the MS moves around with its user.

35.

The RAN consists of BSs and controllers. A BS is a node of (or point in) the network which provides a number of functions. It sends and receives radio transmissions to and from MSs that are within the cell covered by that BS.

36.

MSs are also known as User Equipment (UE) in UMTS. A BS can also be denoted BTS in GSM or Node B in UMTS.

37.

The cells of a network are shown schematically below in Figure 2. A BS is found at the centre of each cell. In reality, however, the cells are of a very irregular shape and will have areas of overlap.

38.

The BSs are connected to a controlling unit (the “Controller” in Figure 1). In GSM this is known as a Base Station Controller (BSC). In UMTS the controller is called a Radio Network Controller (RNC). One of the many functions of the controller is to facilitate handover of a MS between different BSs.

39.

As indicated in Figure 1, the Core Network (CN) may interface with other networks such as the public telephone network and other mobile networks.

OSI seven-layer model

40.

The OSI (Open System Interconnection) model is a common way of describing different conceptual parts of communication networks.

41.

The OSI model has seven layers. From top to bottom, these are as follows:

i)

Layer 7, the Application Layer, which provides services to the user software applications (e.g. email delivery protocols and Hypertext Transfer Protocol (http));

ii)

Layer 6, the Presentation Layer, which performs translation and formatting of information received (which may include the functions of compression/decompression and/or encryption/decryption) to present to the application layer and provides an interface to the Session Layer;

iii)

Layer 5, the Session Layer, which handles communications at a call level, initiating and terminating the communication between users;

iv)

Layer 4, the Transport Layer, which provides communication of data between end users. End to end (i.e. terminal to terminal) error control forms part of this layer;

v)

Layer 3, the Network Layer, which provides routing from where the data enters a network to where it leaves it;

vi)

Layer 2, the Data Link Layer, which provides communication over an individual link within the network. Error control for the link is included in this layer; and

vii)

Layer 1, the Physical Layer, which is concerned with the transmission of the data over the physical medium itself (i.e. protocols that specify how radio waves sent through the air represent data).

42.

The seven layers are shown on both sides of Figure 3 under the images of the MSs and the horizontal arrows reflect the effective links between them (described as logical connections). The curved line shows how the data actually flows down through the layers to provide the required connectivity. It can be seen that the data flows from the Application Layer in one MS down to the Physical Layer where it can be transmitted (over the radio interface) to the Physical Layer of a router element (for example a RNC). The data flows up from the Physical Layer of the RNC to the Network layer where it can be passed to the Network Layer of another RNC and back down to the Physical Layer. Finally, having been transmitted from the Physical Layer of the RNC to the Physical Layer of a second MS, the data flows back up to the Application Layer.

43.

One of the functions in the Data Link Layer is the Medium Access Control (MAC), whose functions include such matters as mapping between logical and transport channels and scheduling.

Channels

44.

To facilitate the specification of mobile telecommunications systems, it is common practice to identify a number of types of “channels” with different roles.

45.

For present purposes, the “physical channels” used to carry information over the radio interface between the MS and the BS are of particular interest. These channels are associated with the Physical Layer (see Figure 3).

46.

Downlink (or forward) physical channels provide communication from the BS to the MS, whereas uplink (or reverse) physical channels provide communication from the MS to the BS.

47.

Physical channels may provide a communication path that is dedicated to an individual MS, or provide communication between a BS and multiple MSs. For example, broadcast physical channels provide communication from a BS to all of the MSs within its coverage area.

48.

Physical control channels carry control signals, used for the purposes of maintaining the operation of the system, whereas physical data channels carry user services (such as a voice call or data communication) and may include higher layer control signalling that is not related to the physical layer itself.

49.

In some cases, mobile system specifications define other types of channel, which make use of the physical channels. For example, in the UMTS system, the physical layer provides a set of “transport channels” to the MAC layer above it. The MAC layer, in turn, provides a set of “logical channels” to the RLC layer above it. The UMTS logical channels are defined by the type of information they carry.

50.

Typically, a mobile system specification defines which physical channels are used to carry each type of higher layer channel. For example, Figure 4 (taken from Holma and Toskala, WCDMA for UMTS, 2000) illustrates the mapping of transport channels to physical channels in the UMTS system in 2000.

51.

Other systems, such as GSM and cdma2000, have their own definitions and mappings of physical and other types of channels, based on similar principles.

Duplexing schemes

52.

Duplexing is the process of achieving two-way communications in a system. The two main forms of duplex scheme that are used in cellular communication are Time Division Duplex (TDD) and Frequency Division Duplex (FDD).

53.

In TDD bi-directional communication takes place on a single radio frequency channel. The system avoids collisions between uplink and downlink transmissions by transmitting and receiving at different times, i.e. the BS and the MS take it in turns to use the channel.

54.

In FDD two (generally symmetrical) segments of spectrum are allocated for the uplink and downlink channels. In this way the base station and mobile station transmit simultaneously, but at different radio frequencies, thereby eliminating the need for either to transmit and receive at the same frequency at the same time. One consequence of the uplink and downlink transmissions being carried at different frequencies is that the attenuation experienced by each signal could be significantly different as the fast fading (as to which, see below) may differ on the uplink and downlink transmissions. In TDD systems, the fading is likely to be similar on the uplink and the downlink as they generally occur on the same frequency.

Multiple access schemes

55.

In any cellular network it is necessary to have a mechanism whereby individual users can be allocated a portion of the radio resources so that they can communicate with the BS using their MS for the duration of a communication. This mechanism is referred to as a “multiple access scheme”. Three of the most common multiple access schemes are TDMA, FDMA and CDMA.

56.

CDMA is of most relevance to this case. In CDMA, several users are permitted to send information simultaneously over a single radio frequency channel. The transmissions of the different MSs are separated from each other through the use of codes. CDMA employs spread spectrum technology and a special coding scheme known as Code Division Multiplexing (CDM), where the BS assigns each MS one or more unique code(s) within one cell. UMTS employs a version of CDMA called Wideband CDMA (WCDMA).
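
The principle can be illustrated with a short sketch (in Python, with invented codes and data): two users’ bits are multiplied by orthogonal spreading codes, transmitted at the same time on the same frequency, and separated again at the receiver by correlating with each user’s code. Real systems use far longer codes and must also contend with noise, fading and power differences.

    # Illustrative CDMA spreading/despreading with orthogonal codes.
    # The codes and data are invented for the example.

    code_ms1 = [1, 1, 1, 1]        # spreading code assigned to MS1
    code_ms2 = [1, -1, 1, -1]      # spreading code assigned to MS2 (orthogonal to MS1's)

    data_ms1 = [1, -1]             # MS1's data bits (+1 / -1)
    data_ms2 = [-1, -1]            # MS2's data bits

    def spread(bits, code):
        """Multiply each data bit by every chip of the user's code."""
        return [b * c for b in bits for c in code]

    # The two spread signals are sent at the same time on the same frequency;
    # the base station receives their sum.
    tx1 = spread(data_ms1, code_ms1)
    tx2 = spread(data_ms2, code_ms2)
    received = [a + b for a, b in zip(tx1, tx2)]

    def despread(rx, code):
        """Correlate the received signal with one user's code to recover its bits."""
        n = len(code)
        bits = []
        for i in range(0, len(rx), n):
            corr = sum(r * c for r, c in zip(rx[i:i + n], code))
            bits.append(1 if corr > 0 else -1)
        return bits

    print(despread(received, code_ms1))   # [1, -1]  -> MS1's data
    print(despread(received, code_ms2))   # [-1, -1] -> MS2's data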

Functions of the radio transmission chain

57.

Figure 5 shows the basic components of a radio link, or “transmission chain”, in UMTS.

58.

Following the arrows in the diagram from the top left: as a first stage, data is taken from the Application Layer (the layer providing a service to the end user of the system) and “source encoded” into an efficient representation for use in the next stages of transmission. For example, source coding may involve an analogue audio speech signal being encoded into a digital signal and compressed.

59.

After source coding, the data is channel coded. Channel coding adds symbols to the data to be transmitted in a particular pattern that allows corruption to be detected and corrected. This is particularly important for a radio link since, unlike wired transmission, it is likely that some amount of corruption will occur during wireless transmission.

60.

The data is then grouped into packets (“packetized”) and multiplexed to allow more efficient use of resources. Multiplexing includes combining data from different services for an individual user as well as combining data from other users. The data stream is then modulated and converted to radio frequency (RF) for transmission.

61.

The final step before transmission is to amplify the signal. The amplification is usually variable so that only so much power is used as is needed to reach the receiver.

62.

The receiving system performs the same steps outlined above, but in reverse order. Detection of the received signal is more complicated than modulating the transmitted signal because the receiver has to cope with noise, interference and multipath propagation (discussed below).

Repetition coding and channel coding

63.

Repetition coding and channel coding are two ways to protect a transmission system against errors introduced by the transmission medium, both of which introduce redundancy. A simple form of repetition coding is to create a codeword in which the same information is repeated multiple times. Repetition coding reduces the rate of transmission of information, but enhances the probability of detection.

64.

Channel coding works by encoding additional symbols. These additional symbols are added to the transmitted data in such a way that if the data symbols are corrupted during transmission this can be recognised and errors in the data can potentially be corrected. A simple example of a channel code is where a single bit is added to the end of binary words to make the number of binary 1s in the word even (i.e. if the number of 1s in the original word was even, the additional bit would be 0, but if it were odd, the additional bit would be 1 to make the overall number of 1s even). This is called a Single Parity Check (SPC) code. If any single bit is corrupted (i.e. 0 becomes a 1 or a 1 becomes a 0), the SPC code will detect the error as the number of 1s in the resulting word will be odd.
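
A minimal sketch of such a Single Parity Check code, with invented data, is as follows:

    # Illustrative Single Parity Check (SPC) code: one parity bit is appended so
    # that the total number of 1s is even; any single bit flip is then detectable.

    def spc_encode(bits):
        parity = sum(bits) % 2           # 0 if the count of 1s is already even
        return bits + [parity]

    def spc_check(word):
        return sum(word) % 2 == 0        # True if no (odd number of) errors detected

    word = spc_encode([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
    assert spc_check(word)

    corrupted = word[:]
    corrupted[2] ^= 1                    # flip a single bit in transit
    assert not spc_check(corrupted)      # the error is detected (but not corrected)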

65.

A Cyclic Redundancy Check (CRC) code adds redundant bits to a packet of user data based on the remainder of a polynomial division. In some contexts, CRC bits are referred to as a Frame Check Sequence (FCS). When the receiver gets a packet of data – the frame – it calculates the same CRC, and compares the result to the contents of the received FCS. If the calculated value is different from the one that was sent, it can be concluded that some alteration has been made to the message between the time the two functions were calculated, i.e. between the transmitter and the receiver. An error will have occurred. This is schematically illustrated in Figure 6.
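
The mechanism can be sketched as follows; Python’s standard zlib.crc32 (a 32-bit CRC) is used here purely as a stand-in for whichever polynomial a particular system specifies, and the payload is invented:

    # Illustrative CRC-based Frame Check Sequence (FCS).
    import zlib

    def add_fcs(payload: bytes) -> bytes:
        fcs = zlib.crc32(payload)
        return payload + fcs.to_bytes(4, "big")      # frame = payload + FCS

    def fcs_ok(frame: bytes) -> bool:
        payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(payload) == fcs            # recompute and compare

    frame = add_fcs(b"user data packet")
    assert fcs_ok(frame)

    corrupted = bytes([frame[0] ^ 0x01]) + frame[1:] # a single corrupted bit
    assert not fcs_ok(corrupted)                     # the receiver detects the error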

66.

If larger numbers of additional symbols are used, the location of an error can be identified as well as the fact that an error occurred. In this case the channel code may be used to correct errors, as well as detect them. This is known as Forward Error Correction (FEC).

Error control strategies

67.

Three broad methods of error control were specified in UMTS in 2001: error concealment, FEC and Automatic Repeat reQuest (ARQ).

68.

Error concealment. With error concealment, errors are detected and the corrupted information identified so that it can be discarded. The remaining information is available and in some cases can be used to mask the missing corrupt data. This can be done by repeating a previous sample, muting the corrupt sample, or by trying to interpolate from the surrounding values. This is shown in Figure 7. Error concealment works well in speech and audio systems as long as the error rate is low. It can also be used for images in some cases, but not for general data, such as emails or file transfer.

69.

FEC. As mentioned above, FECadds redundancy to transmitted data in such a way that it is possible for the receiver to reconstruct the original data in the event that part of it was corrupted. This is shown in Figure 8.

70.

FEC involves more computation and requires an additional transmission overhead compared with error concealment, but has the benefit of correcting errors without requiring a feedback path to the transmitter. Also, little additional delay is introduced over that involved in transmitting the additional symbols.

71.

ARQ. With ARQ, errors are detected and in this case the transmitter is informed. In the case of uncorrectable errors, signalling on a return channel is used to request retransmission of the data. One approach for ARQ is to send back an acknowledgement or “ACK” message (to say that a packet was correctly received) or a negative acknowledgement or “NACK” message (to say that a packet was received with an uncorrectable error). Other approaches may send only one type of message (e.g. ACK), with the absence of a message being interpreted as the alternative (e.g. NACK). This is shown in Figure 9.

72.

There are several ways in which ARQ methods may be implemented. One of these is known as Stop And Wait (SAW). In one implementation of SAW, the transmitter will stop transmitting after one packet has been sent and wait for a response. During this process, it will hold on to the packet it transmitted most recently so that it can easily be re-sent if the response (or absence of response) indicates that the packet was not received.

73.

Although the SAW method is simple, the transmitter has to keep waiting for an ACK or NACK. Efficiency can be improved by continuing to transmit new packets up to a given limit without waiting for any received ACKs. The number of outstanding unacknowledged packets is known as the “window” and is fixed to some maximum value. It is therefore also described as a “sliding window” method (in fact SAW can be thought of as a sliding window protocol with a window size of 1).

74.

There are then two options available; either the transmitter transmits only the packet that was in error (Selective Retransmission or SR) or it also retransmits all the frames that had been transmitted after the lost packet (“go back N” retransmission). In SR, the transmitter resends only the packet that resulted in the NACK response. This is only possible if the receiver is able to receive further packets out of sequence and then reorder the packets later on. Additionally, it requires the acknowledgment message to specify which packet was not correctly received.

75.

The go back N system is simpler, because the receiver does not need to reorder packets. It is also unnecessary to specify the individual packet which was not correctly received so long as it identifies that packets up to a particular number have been correctly received.
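
A minimal Stop And Wait sketch, with an invented lossy channel standing in for the radio link and the receiver’s CRC check, is as follows:

    # Minimal Stop And Wait (SAW) ARQ sketch: the transmitter sends one packet,
    # keeps a copy, and waits for an ACK/NACK before sending the next packet.
    # The channel that occasionally corrupts packets is simulated at random.
    import random

    def send_over_channel(packet, error_rate=0.3):
        """Return (packet, ok_flag); ok_flag is False if the packet was corrupted."""
        return packet, random.random() >= error_rate

    def saw_transmit(packets, max_attempts=10):
        delivered = []
        for seq, data in enumerate(packets):
            for attempt in range(max_attempts):
                rx, ok = send_over_channel((seq, data))
                if ok:                     # receiver's check passed -> ACK
                    delivered.append(rx)
                    break
                # check failed -> NACK: retransmit the stored copy of the same packet
        return delivered

    print(saw_transmit(["p0", "p1", "p2"]))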

76.

Comparison of FEC and ARQ. FEC works very effectively when there is a known, and invariant, level of noise. The amount of redundancy can then be adjusted to correct the expected number of errors. ARQ works well when errors occur infrequently. Packets can be sent with much less redundancy (since errors only need to be detected not corrected), and when errors do occur the packet is retransmitted. If errors occur frequently, however, then the overhead from retransmission may exceed the overhead from FEC.

77.

Most wireless systems use a combination of FEC and ARQ. FEC provides an efficient way of correcting errors caused by the general level of noise and interference, while ARQ can pick up occasional errors caused by a significant degradation of the radio channel. This allows the FEC system to be designed for the average amount of noise rather than the worst-case scenario. Typically, each data packet would be protected with an FCS at one protocol layer, before being passed on to a lower layer for FEC. At the receiver, the FEC would be decoded and an attempt made to correct any errors before checking the FCS. If the FCS check failed the receiver would request a retransmission.

78.

HARQ. A variation of ARQ is Hybrid Automatic Repeat reQuest (HARQ). HARQ schemes combine FEC and ARQ. In a HARQ scheme, if the data cannot be recovered successfully by the receiving entity from the received signal (i.e. there are too many errors to be corrected using FEC), a NACK signal is sent to the sending entity. The retransmitted signal can be combined by the receiving entity with the signal received in previous, unsuccessful transmissions, and another attempt can be made to recover the transmitted data. If the data can be recovered, it is sent to the next stage in the receiver chain for processing. If it cannot be recovered, a further retransmission can be requested using a NACK response and this signal is then combined with the information received in the previous transmissions to recover the transmitted data. This process will proceed until the maximum number of retransmissions has been reached and, if the data still cannot be successfully recovered, the data will not be passed on to the next stage in the receiver chain.
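
The combining step can be sketched as follows; the model is a deliberately simplified one in which each retransmission of the same BPSK symbols is added to what was received before, and the comparison with the known data stands in for a CRC check:

    # Illustrative HARQ with soft combining (a simplified chase-combining model):
    # each retransmission is combined with earlier receptions, raising the
    # effective SNR until the data can be recovered. Noise level and limits invented.
    import random

    def receive(symbols, noise_std):
        """One noisy reception of BPSK symbols (+1 / -1)."""
        return [s + random.gauss(0.0, noise_std) for s in symbols]

    def harq_receive(symbols, noise_std=1.5, max_tx=4):
        combined = [0.0] * len(symbols)
        for attempt in range(1, max_tx + 1):
            rx = receive(symbols, noise_std)
            combined = [c + r for c, r in zip(combined, rx)]   # combine with earlier copies
            decoded = [1 if c > 0 else -1 for c in combined]
            if decoded == symbols:          # stands in for a successful CRC check
                return decoded, attempt     # ACK: pass data up, note attempts used
            # otherwise a NACK would be sent and a further transmission requested
        return None, max_tx                 # give up after the retransmission limit

    data = [1, -1, -1, 1, 1, -1]
    print(harq_receive(data))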

Noise and interference

79.

Both noise and interference can affect and limit wireless communications. It is therefore important for the levels of noise and interference to be measured in order to determine the optimum power for radio transmissions.

80.

The Signal-to-Noise Ratio (SNR) is a measurement which compares the level of a wanted signal to the level of background thermal noise. Thermal noise is approximately white, meaning that its power spectral density is uniform throughout the frequency spectrum. The amplitude of the random white noise is commonly modelled as a Gaussian probability density function, often described as Additive White Gaussian Noise (AWGN).

81.

A related measurement is the Signal-to-Interference Ratio (SIR). Although the terms SNR and SIR are often used interchangeably, noise and interference are not identical phenomena. Interference is any unwanted radio frequency signals that arrive at the receiving antenna from other intended (e.g. BS or MS) or unintended (e.g. electronic equipment, vehicle engines) transmitters.

82.

In the context of BPSK (as to which, see below), the SNR may be expressed as the energy per bit/noise spectral density, Eb/N0.

83.

The Bit Error Rate (BER) quantifies the rate at which errors occur. The BER depends among other things on the SNR. Broadly speaking, the higher the SNR, the lower the error rate, and vice-versa.

Signal transmission and detection

84.

Information is transmitted on a radio signal by altering its amplitude, frequency or phase, or a combination of these, based on the information to be conveyed, in a process known as modulation. The information is recovered from the radio signal by detecting these changes in the received signal's characteristics in a process known as demodulation.

85.

In Binary Phase Shift Keying (BPSK) information is transmitted in the phase of a signal. The phase of the sinusoidal radio signal is set to 0° when a 1 is transmitted and is “shifted” to 180° when a 0 is transmitted. These signals are said to be “antipodal”. The amplitude of the two signals is equal. This is shown in vector form in Figure 12.

86.

The detector at the receiver uses a local reference signal that has the same phase characteristics as the reference used to generate the transmitted signal at the transmitter, i.e. the two are coherent. The coherent detector recovers the in-phase voltage component of the received signal. If the recovered voltage is positive, this is interpreted as a 1; and if it is negative, this is interpreted as a 0.

87.

Modelling the effect of noise. Noise can lead to errors in the recovery of the information from the received signal. If the recovered voltage level is recorded for a large number of transmitted data symbols, these recorded voltage levels will be distributed around the ideal (noise-free) voltage level. This distribution is usually modelled as a Gaussian probability density function. Figure 13 shows a Gaussian distribution for each of the 0 and 1 signals in a BPSK detector.

88.

Probability of error. The receiver must decide which signal was transmitted based on the received signal, and this is achieved by applying a so-called “decision threshold”. Referring to Figure 13, if the received voltage is greater than 0 the transmitted signal is assumed to be 1, whereas if the received voltage is less than 0 it is assumed to be 0. In this case the decision threshold is 0 and this lies halfway between the two signal points.

89.

There is a chance that the noise could cause the recovered voltage for a given transmitted signal to fall on the “wrong” side of the decision boundary, and in that case an error will occur in the received data. This is illustrated in Figure 14, where the shaded area shows the region where the recovered voltage is greater than zero when a 0 is transmitted, and hence the receiver will interpret this as a 1. The probability of this type of error occurring can be determined by calculating the area of the shaded region using integration techniques.

90.

There is a trade-off, associated with setting the detection threshold, between detecting genuine signals and mistaking noise for a signal: the more sensitive the detector, the more genuine signals it will detect, but also the more noise it will pick up (leading to the risk of a signal being detected when in reality it is absent, i.e. a false positive); the less sensitive the detector, the fewer genuine signals it will detect (leading to the risk of a signal not being detected when in reality it is present, i.e. a false negative), but also less noise. Thus, when the detection threshold is increased, the probability of a false positive and the probability of detection decrease. When the detection threshold is decreased, the probability of a false positive and the probability of detection increase.
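
The trade-off can be made concrete with a short sketch computing the two error probabilities for binary antipodal signalling in Gaussian noise; the signal levels, threshold and noise figure are invented, and the Gaussian tail probability Q(x) is evaluated with the standard complementary error function:

    # Illustrative error probabilities for binary antipodal signalling in Gaussian
    # noise. Moving the decision threshold (or raising the power of one signal)
    # lowers one error probability at the expense of the other.
    import math

    def q(x):
        """Tail probability of a standard Gaussian, Q(x)."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def error_probs(level_one, level_zero, threshold, noise_std):
        # "1" sent at +level_one, "0" sent at -level_zero; decide "1" if voltage > threshold.
        p_one_read_as_zero = q((level_one - threshold) / noise_std)
        p_zero_read_as_one = q((threshold + level_zero) / noise_std)
        return p_one_read_as_zero, p_zero_read_as_one

    # Equal powers, threshold midway: the two error probabilities are equal.
    print(error_probs(1.0, 1.0, 0.0, 0.5))

    # Raising the threshold, or transmitting the "0" signal at higher power, makes
    # mistaking a "0" for a "1" less likely, at the cost of the reverse error.
    print(error_probs(1.0, 1.0, 0.3, 0.5))
    print(error_probs(1.0, 1.5, 0.0, 0.5))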

Power control

91.

Power control is a fundamental radio resource management feature of mobile telecommunication systems which affects the quality of service experienced by individual users and the overall capacity of the system.

92.

As an MS moves around a network, its radio environment changes because of its distance from base stations, obstructions to the radio signals, and reflection, refraction and diffraction caused by surrounding objects leading to multiple propagation paths.

93.

Power control is relevant to both the downlink and the uplink of mobile systems, although the specific requirements may depend on the nature of each system.

94.

In the context of CDMA systems, the power control mechanism must be able to respond to slow fading and fast fading of the radio signals, as explained below, and also address the so-called “near-far” problem on the uplink/reverse link.

95.

Near-far problem. The “near-far” problem is illustrated in Figure 15.

96.

Figure 15 depicts two mobile stations, MS1 and MS2, which are at different distances from the BS. In a CDMA system, the signals of the two mobile stations are sent at the same time and on the same frequency, and are distinguished by means of different codes.

97.

Because MS1 is at a larger distance from the base station than MS2, the signal of MS1 is likely to suffer a greater loss of power on its way to the base station than the signal of MS2 (although other aspects of the radio transmission, such as buildings and multipath propagation, will also have a bearing). If MS1 and MS2 transmitted the signals at the same power level, the MS1 signal would be much weaker at the base station than the MS2 signal (Received Power RP1 < RP2). There is a risk that the MS2 signal might cause excessive interference, and thus prevent reception of the signal from the more distant MS1 by the BS.

98.

Because of this, uplink power control in CDMA systems is designed to ensure the BS receives equivalent power levels from all MSs within its coverage area. Hence, for example, MS1 would generally transmit at a higher power level than MS2 to deliver the same service.

99.

Slow fading. As an MS moves further from a BS, or behind an obstruction such as a building or a hill, its signal gradually becomes weaker, for example over a period of seconds. As the MS moves closer to the BS, or emerges from the shadow of the obstruction, the signal recovers. This effect is referred to as slow fading.

100.

Multipath (fast) fading. Like other forms of electromagnetic radiation, the radio frequencies utilised in mobile telecommunications are reflected, refracted and diffracted by interactions with the surrounding environment, such as buildings, street furniture and trees. Radio signals therefore do not follow one straight path between BS and MS; instead, many paths can be taken and the signal received at the MS or BS will be a composite of all the various paths taken. This is known as multipath propagation.

101.

Depending on the path taken by the radio signal, it will be attenuated (i.e. reduced in strength) and phase shifted (i.e. re-aligned with respect to time) by different amounts. The composite signal received will likewise vary in accordance with the signals received from each individual path. For example, if there is a great deal of subtractive interference or cancellation (due to phase shifting) in the composite signal, it will be received at a reduced power in comparison to a signal that has travelled over a direct path.

102.

Additionally, since the MS and the environment around it do not remain static during operation (for example, vehicle movements may affect the path taken by the radio signal), the multipath phenomenon is also dynamic. As a result, the composite signal can rapidly change in power, for example over a period of milliseconds as illustrated in Figure 16 (taken from Holma and Toskala). This effect is known as multipath fading or fast fading. It can be modelled by a statistical model known as Rayleigh fading.

103.

Mobile systems based on CDMA have particularly stringent requirements for power control, because users share the same spectrum at the same time and are differentiated only in the code domain. The power transmitted by any device in such a system may create interference for nearby devices operating in the same radio channel.

104.

Power control techniques. There are two general approaches to the dynamic control of power levels in a mobile system, referred to as “open-loop” and “closed-loop” control. Most mobile systems apply both methods.

105.

Open-loop power control requires no feedback from the receiver to the transmitter. The transmitter of a signal (either MS or BS) makes an estimate of the radio propagation conditions, typically based on an estimate of path loss between the MS and BS, and sets its transmission power accordingly (the higher the received signal power, the lower the MS transmitter power set and vice-versa). This approach relies on the propagation conditions being similar in both directions (uplink and downlink). Given that the geographic distance is the same in each direction, it is a reasonable starting point, although the conditions may be quite different if the uplink and downlink operate on different frequencies. Open-loop power control is often used at the start of a connection, when it is not possible to apply closed-loop techniques.

106.

Closed-loop power control uses a feedback loop from the signal receiver to the signal transmitter to control the transmitted power level. For example, on the uplink the BS receiving the MS signal may feed back “power up” or “power down” commands to control the MS transmitter, according to whether the received power level is too low or too high, respectively. Closed-loop power control allows the system to accommodate situations where the signal propagation conditions are different for the uplink and downlink of a system.

107.

As part of closed-loop power control it is common to use a combination of “outer-loop” and “inner-loop” control.

108.

Outer-loop power control is a mechanism to set a target for inner-loop power control. Typically, the quality of service of a received signal is assessed in some way, for example by determining the error rate of the decoded data. If the error rate of the received data is too high or too low, then the system increases or decreases, respectively, the target power level or SNR for inner-loop power control.

109.

Inner-loop power control aims to achieve a defined target level for some parameter of a received signal, such as its SNR. Typically, these are parameters that can be determined quickly, so that power control commands can be returned to the transmitter promptly, to deal with fast fading.
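
The interaction of the two loops can be sketched as follows; the step sizes, targets, path loss and error rate are invented, and fading is ignored:

    # Illustrative inner-loop/outer-loop power control. The inner loop steps the
    # MS transmit power up or down towards an SNR target; the outer loop nudges
    # that target according to the observed error rate.

    def inner_loop_command(measured_snr_db, target_snr_db):
        """BS compares measured SNR with the target and returns an up/down command."""
        return +1 if measured_snr_db < target_snr_db else -1

    def outer_loop_adjust(target_snr_db, block_error_rate, target_bler=0.01, step_db=0.5):
        """Raise the SNR target if too many blocks are in error, lower it otherwise."""
        return target_snr_db + (step_db if block_error_rate > target_bler else -step_db)

    tx_power_db = 10.0
    target_snr_db = 6.0
    path_loss_db = 3.0                     # invented, held constant for simplicity

    for slot in range(5):
        measured_snr_db = tx_power_db - path_loss_db
        tx_power_db += 1.0 * inner_loop_command(measured_snr_db, target_snr_db)  # 1 dB steps
        print(slot, round(tx_power_db, 1))

    # Less frequently, the outer loop would adjust the target:
    target_snr_db = outer_loop_adjust(target_snr_db, block_error_rate=0.05)
    print(target_snr_db)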

UMTS Release 4

110.

At the Priority Date, the most recent finalised version of the UMTS standard was Release 4. Release 4 included the following features.

111.

A number of different communication channels were used within the UMTS system, including “dedicated channels” and “shared channels”. A dedicated channel is a channel between a BS and a specific MS. A shared channel may be used by more than one MS.

112.

There were two types of uplink dedicated physical channels, the uplink Dedicated Physical Data Channel (uplink DPDCH) and the uplink Dedicated Physical Control Channel (uplink DPCCH).

113.

The DPDCH was a dedicated channel that transferred user data, such as speech or video data. This channel was used to carry the DCH transport channel. There could be zero, one, or several uplink DPDCHs on each radio link.

114.

The uplink DPCCH was a dedicated channel that carried control information, namely pilot bits (these carry information required to facilitate channel estimation for coherent demodulation), Transport Format Combination Indicator (TFCI) bits, Feedback Information (FBI) bits and Transmit Power Control (TPC) bits. There was one, and only one, uplink DPCCH on each radio link.

115.

UMTS Release 4 used outer-loop and inner-loop power control for these uplink channels. The target set in the outer loop was a measure of SNR. The inner loop was operated by a stepwise adjustment of the uplink channels in response to power control commands received from the BS (essentially “up” or “down”). All of the uplink DPDCHs and DPCCHs moved up or down by the required step-size on receipt of each power control command.

116.

A power difference was applied between DPDCH (physical user data channels) and DPCCH (physical control signalling) using gain factors. The gain factor βc was applied to the DPCCH and the gain factor βd was applied to the DPDCH. This power difference depended upon the Transport Format Combination (TFC), namely the combination of currently valid transport formats.
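
On the assumption that the gain factors scale the channel amplitudes, so that the resulting power offset is the square of the amplitude ratio, the offset can be illustrated as follows (the particular values of βc and βd are chosen only for the example):

    # Illustrative power offset between DPDCH and DPCCH derived from gain factors,
    # assuming beta_c and beta_d scale the channel amplitudes (values invented).
    import math

    beta_c = 8 / 15    # applied to the DPCCH (illustrative value)
    beta_d = 15 / 15   # applied to the DPDCH (illustrative value)

    power_ratio = (beta_d / beta_c) ** 2
    offset_db = 10 * math.log10(power_ratio)

    print(round(power_ratio, 2), "x,", round(offset_db, 1), "dB")   # ~3.52 x, ~5.5 dB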

117.

There was only one type of downlink dedicated physical channel, being the downlink Dedicated Physical Channel (downlink DPCH).

118.

UMTS Release 4 specified the use of FEC and ARQ in the physical and RLC layers respectively.

119.

In UMTS Release 4 there were, broadly speaking, two types of handover of an MS from one BS and to another BS: hard handover and soft handover. In hard handover, the MS is handed from BS1 to BS2 so that it is only ever connected to one of them. Thus the connection between the MS and BS1 is broken before the connection between the MS and BS2 is made. In soft handover, the connection between the MS and BS2 is made before the connection between the MS and BS1 is broken. Soft handover has the advantage that, in the uplink, the MS may be able to transmit at lower power.

Development of HSDPA

120.

At the Priority Date, High-Speed Downlink Packet Access (HSDPA) was under development as part of UMTS Release 5. HSDPA as it then stood was described in TS 25.308 v5.0.0 (as to which, see further below). HSDPA was envisaged to have two key features: (i) adaptive modulation and coding and (ii) HARQ.

121.

Adaptive modulation and coding is where the modulation scheme and/or FEC coding scheme can be changed to make best use of the prevailing channel conditions. The use of adaptive modulation and coding with fast scheduling lends itself to data delivery that is “bursty” rather than continuous. It was envisaged that there would be periods when the MS would not receive high speed data and would therefore not need to acknowledge any packets. A period where no packets are sent is referred to as discontinuous transmission (DTX).

122.

A HARQ protocol was to be used based on a so-called Multi-Channel SAW scheme, which was going to be asynchronous in the downlink and synchronous in the uplink. In Multi-Channel SAW with a synchronous uplink, the BS knows when it has transmitted data on the downlink to an MS and when to expect an ACK or NACK signal on the uplink from that MS.

123.

A key difference between the HARQ scheme proposed for HSDPA and ARQ (available in UMTS Release 99/Release 4) was that HARQ was to be dealt with at a lower layer of the protocol stack than ARQ. In contrast to the ARQ scheme in UMTS which was in the higher RLC layer, it was proposed in HSDPA to employ a HARQ process in the physical and MAC layers of the BS and the MS. This meant that retransmissions could be requested and sent very quickly because it did not require any interaction with higher layers in the protocol stack or other functional components beyond the BS and MS.

124.

HSDPA introduced a new shared downlink transport channel, the HS-DSCH, on which packet data would be sent. HSDPA also introduced an additional uplink dedicated control channel for HSDPA related uplink signalling.

125.

The HSDPA downlink physical layer model for FDD is shown from the perspective of the MS in Figure 2 of TS 25.308 reproduced below. The associated Dedicated Physical Channel (DPCH), which uses the legacy UMTS downlink channel structure, is shown on the left and the new HS-DSCH is shown on the right. Whereas the MS communicates with a number of cells on the DPCH channels, it only communicates with a single cell on the new HS-DSCH channel.

[Figure 2 of TS 25.308: DCH model (left) and HS-DSCH model (right), showing the Coded Composite Transport Channel (CCTrCH), the physical channel data streams and the physical channels (Phy CH), with the HS-DSCH served by a single cell (Cell 1).]

126.

The new shared HS-DSCH channel could be used flexibly, such that an MS could be assigned multiple channelization codes in the same Transmission Time Interval (TTI) or multiple MSs could be multiplexed in the code domain within a TTI. The TTI for the HS-DSCH was defined to have a fixed size of 2 ms. The high speed data was delivered in blocks with redundancy in the form of a CRC with a fixed size of 24 bits. This allowed the receiver at the MS to determine if the data had been received correctly, which was the starting point for the MS to ACK or NACK the data by means of the HARQ protocol. A New Data Indicator (NDI) was provided with each data block, and incremented for each new block so that the receiver in the MS could distinguish between new data and retransmitted data.
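
The use of the NDI at the receiver can be sketched as follows; the function names and the sequence of blocks are invented for illustration:

    # Illustrative use of the New Data Indicator (NDI): the MS compares the NDI
    # received with a block against the last NDI it saw to tell new data from a
    # retransmission (a retransmission keeps the same NDI; a new block changes it).

    def classify_block(ndi, last_ndi):
        """Return 'new' if the NDI has changed, otherwise 'retransmission'."""
        return "new" if ndi != last_ndi else "retransmission"

    last_ndi = None
    for ndi, crc_ok in [(0, False), (0, True), (1, True), (2, False), (2, True)]:
        kind = classify_block(ndi, last_ndi)
        # a retransmission would be soft-combined with earlier attempts before the CRC check
        print(kind, "-> ACK" if crc_ok else "-> NACK")
        last_ndi = ndi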

The problem to which the Patent is addressed

127. An issue with ARQ and HARQ systems that was recognised in the field at the Priority Date is that the consequences of errors in the detection of ACK and NACK signals can be significantly different. As explained above, in an ordinary case the transmitter would retransmit a packet if a NACK were detected. If the transmitter detects a NACK when an ACK was sent (a “false NACK”), then the packet is retransmitted anyway, which only wastes a little system resource. On the other hand, if a NACK is sent, but detected as an ACK (a “false ACK”), no retransmission is made. This situation can only be recovered from by using higher layer processes, which adds delay to the overall data transmission, and can result in the retransmission of a larger portion of the data, which would represent a significant waste of system resources. Therefore, the cost of a false ACK is generally much more significant than the cost of a false NACK, particularly where error-free data transmission is required. For this reason, it was recognised that it was desirable to control the relative probabilities of errors in decoding ACKs and NACKs at the BS.

The Patent

Technical field

128. The specification explains at [0001] that the invention is particularly relevant to UMTS, although it is applicable to other mobile radio systems.

Background art

129.

The specification explains at [0002] that in UMTS an HSDPA scheme is being developed which may facilitate transfer of packet data to an MS at up to 4 Mbps.

130.

At [0003] the specification notes that a conventional component of a packet data transmission system is an ARQ process. In this context it states:

“Since packet transmission is typically intermittent, discontinuous transmission (DTX) is normally employed so that nothing is transmitted by the MS unless a data packet has been received.”

131.

The specification goes on:

“[0004] A problem with such an ARQ scheme is that the consequences of errors in the ACK and NACK are significantly different. Normally the BS would re-transmit a packet if a NACK were received. If the BS receives a NACK when a ACK was sent, then the packet is re-transmitted anyway, which only wastes a little system resource. If a NACK is sent, but received as a ACK, then no re-transmission is made. Without special physical layer mechanisms, this situation can only be recovered from by using higher layer processes, which adds delay and is a significant waste of system resources. Hence, the cost of an error in a NACK is much more serious than the cost of an error in a ACK.

[0005] In order to optimise system performance, it is desirable to control the relative probabilities of errors in decoding ACKs and NACKs. In one UMTS embodiment this is done by setting different detection thresholds at the BS, which requires the MS to transmit the ACK/NACK codeword with a specific power level (e.g. relative to uplink pilot power). This power level and the detection threshold can therefore be chosen to balance costs of ACK/NACK errors, interference generated by the MS, and battery power used by the MS. With DTX, the situation is a little more complex. However, the BS, as the source of the packet, is aware of when a ACK/NACK should be sent by the MS and it should therefore not normally be necessary to specifically detect the DTX state.”

132.

As is common ground, the problem referred to at [0004] is the known problem described above. The second and third sentences of [0005] refer to a previous proposal to solve this problem in UMTS, which the skilled person would also have been aware of.

133.

At [0006] the specification refers to a co-pending application which discloses a physical layer mechanism for recovering from the case where the BS misinterprets a NACK as an ACK which makes use of an additional codeword, REVERT. (This proposal was not incorporated into the Standard.)

Disclosure of the invention

134.

The specification sets out at [0010]-[0015] five aspects of the invention which describe the invention from the perspective of the “system” (the first aspect), the “primary station” (i.e. BS) (the second and third aspects), the “secondary station” (i.e. MS) (the fourth aspect) and the method in general (the fifth aspect).

135.

The invention is summarised at [0011] as follows:

“By transmitting different acknowledgement signals at different power levels, the probability of the primary station correctly interpreting signals of different types can be manipulated to improve total system throughput and capacity. In one embodiment negative acknowledgements are transmitted at a higher power level than positive acknowledgements to increase the probability of the primary station retransmitting a data packet when necessary. In another embodiment an additional revert signal type is provided, which requests the primary station to retransmit a data packet initially transmitted prior to the current data packet and which was not correctly received. The revert signal may be identical to the negative acknowledgement signal but transmitted at a higher power level.”

136.

It is important to note that the second embodiment referred to, the use of an additional revert signal, is not the subject of the claims of the Patent that are alleged to be infringed. Counsel for the Defendants made the forensic point that the feature of BS control of the power levels of the acknowledgement was not highlighted as having any particular inventive significance, but nevertheless this feature is part of the invention as claimed.

Modes for carrying out the invention

137. The specification returns to the false ACK problem at [0022]:

“As discussed briefly above, the consequences of errors in acknowledgements 204, 206 received by the BS 100 are different. If an ACK 206 is received as a NACK 204, the respective packet 202 is retransmitted but the MS 110 can recognise this situation by the sequence number. However, if a NACK 204 is received as an ACK 206, the BS 100 continues with transmission of the next packet 202. The MS 110 can determine that this has happened, from the sequence number of the received packet 202. However, it cannot request the BS 100 to retransmit the packet 202 received in error without invoking higher layer procedures, thereby wasting significant resources.”

138.The core of the teaching is contained in the following passage:

“[0023] It is likely for most applications that DTX would be applied for most of the time, given the typically intermittent nature of packet data transmission. In addition, for a well configured system, NACKs 204 should be sent significantly less often than ACKs 206. Hence, in a system made in accordance with the present invention a NACK 204 is transmitted at a higher power level than an ACK 206. This power offset is advantageous because it reduces the error probability for the NACK 204 without increasing the power transmitted for the ACK 206. It is particularly advantageous if the probability of a MS 110 missing a packet is very small, so there is no need to consider optimum setting of BS detection thresholds to differentiate NACK from DTX. Hence, any given error performance targets could be achieved with minimum average power transmitted by the MS 110.

[0024] It will be recognised that if a MS 110 is transmitting more NACKs 204 than ACKs 206, this proposed strategy would result in an increase in average uplink interference rather than the desired decrease. Therefore, in one embodiment of the present invention, the MS 110 is forbidden from applying the power offset unless it has previously positively acknowledged more than a certain proportion of packets (e.g. 50%). This prevents the power offset from causing an undue increase in uplink interference in poor downlink channel conditions.

[0025] In another embodiment of the present invention, the relative power levels of ACKs 206 and NACKs 204 are modified depending on the proportion of ACKs and NACKs sent. For example, this adaptation could be controlled by a time-weighted average of the proportion of ACKs 206 sent. The detection threshold at the BS 100 could [be] adapted in a similar way based on the proportion of ACKs 206 received. It is apparent that such processes would converge, even in the presence of errors.

[0026] In another embodiment of the present invention, instead of being predetermined the ACK/NACK power offset (or maximum offset) could be signalled by the BS 100 depending on the type of service being conveyed to the MS 110 via the data packets 202. For example, in a real-time streaming service with strict timing constraints, a packet which is lost due to a wrongly-detected NACK 204 may simply be ignored by the application if there were not enough time even for a physical layer retransmission. However, for a data service where correct receipt of packets was essential, an ACK/NACK power offset could be signalled. The offset might also be useful in streaming services with slightly less strict timing requirements, where there was insufficient time for a higher-layer retransmission, but a NACK power offset would increase the chance of an erroneous packet being rectified by means of fast physical layer retransmission. It would therefore be beneficial to allow a different offset value to be signalled for each downlink transport channel.”

139.

In summary, the problem identified in [0004]-[0005] can be solved by transmitting NACKs at a higher power than ACKs, which reduces the error probability for NACKs without increasing the power transmitted for ACKs. If a particular MS is transmitting more NACKs than ACKs for some reason, this would increase uplink interference rather than decrease it, but this can be avoided by preventing the MS from applying the power offset unless it has ACK’d a certain percentage of packets. The relative power levels of ACKs and NACKs can be modified depending on the proportion of ACKs and NACKs sent or they can be signalled by the BS. The claims are directed to the latter embodiment. The Patent does not purport to solve any problem with DTX, but says that the invention is particularly advantageous if the probability of the MS missing a packet is very small, in which case there is no need to set the decision threshold to differentiate NACK from DTX.
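
Purely by way of illustration (and not as a representation of Philips’ own implementation), the behaviour summarised above might be sketched as follows; the parameter names, the 50% threshold and the use of dB values are assumptions of the sketch.

    # Illustrative sketch only: the MS selects the acknowledgement transmit power
    # from values signalled by the network, applying the NACK offset only if it has
    # recently ACK'd more than a given proportion of packets.
    def ack_nack_power_db(signal_type: str,
                          ack_power_db: float,      # ACK power level signalled by the network
                          nack_offset_db: float,    # ACK/NACK power offset signalled by the network
                          recent_ack_ratio: float,  # proportion of recent packets positively acknowledged
                          min_ack_ratio: float = 0.5) -> float:
        if signal_type == "ACK":
            return ack_power_db
        if recent_ack_ratio > min_ack_ratio:   # good downlink conditions: offset permitted
            return ack_power_db + nack_offset_db
        return ack_power_db                    # otherwise the offset is not applied

    print(ack_nack_power_db("NACK", -3.0, 4.0, recent_ack_ratio=0.9))  # -> 1.0
    print(ack_nack_power_db("NACK", -3.0, 4.0, recent_ack_ratio=0.3))  # -> -3.0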

140.

The specification elaborates on the embodiment in which the power levels are signalled by the BS at [0029]:

“In one preferred embodiment, particularly suitable for UMTS HSDPA, the ACK/NACK power offset used by the MS 110, as well as the ACK power level would be determined by higher layer signalling from the network. Alternatively, the offset could be signalled using a single information bit, signifying ‘no offset’ (i.e. equal transmit power for ACK 206 and NACK 204) or ‘use offset’, signifying the use of a pre-determined value of power offset. More signalling bits could be used to indicate a larger range of values of offset.”

As the skilled person would understand, “signalling from the network” would be relayed by the BS.

141.

At [0031] – [0047] the specification addresses the use of the REVERT signals and additional signalling parameters.

142.

The specification returns to the subject of the power offset being signalled by the BS at [0048]:

“In general, the power levels at which the ACK/NACK and/or REVERT commands are transmitted may be adjusted in order to achieve a required level of reliability. These power levels could be controlled by messages sent from the BS 100 to the MS 110. These could specify the power level relative to the pilot bits on the uplink dedicated control channel, or relative to the current power level for the channel quality metric. In the case of the dedicated control channels of one MS 110 being in soft handover with more than one BS 100 the power of the uplink dedicated control channel is not likely to be optimal for all the BSs 100 involved. Therefore, a different power level, preferably higher, may be used for sending the ACK/NACK and/or REVERT commands. This power difference could be fixed, or determined by a message from a BS 100. When the transmission of ACK/NACK and/or REVERT is directed to a particular BS 100, the power level may be further modified to take into account the quality of the radio channel for that transmission. For example, if the best radio link from the active set is being used, the power level may be lower than otherwise.”

143.

As the Defendants emphasise, the Patent contains no details of how to implement the invention: it does not provide any details of the modulation scheme or how the power levels are to be determined or of the signalling from the BS. It is common ground, however, that the skilled person would be able to implement the invention using their common general knowledge. Thus, although [0048] does not spell out precisely how messages sent from the BS “could specify the power level relative to the pilot bits on the uplink dedicated control channel”, Mr Edwards’ evidence was that the skilled person could implement this using their common general knowledge by setting a target SNR for the pilot bits and arranging for the BS to monitor whether this SNR was above or below the target, and then to use that information to adjust the ACK/NACK channel.
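
Mr Edwards did not put his evidence in these terms, but the kind of mechanism he described might be sketched, purely by way of illustration, as follows; the step size and the form of the signalled adjustment are assumptions of the sketch.

    # Illustrative sketch only: the BS compares the measured SNR of the uplink pilot
    # bits with a target and signals a relative adjustment for the ACK/NACK field.
    def ack_nack_adjustment_db(measured_pilot_snr_db: float,
                               target_pilot_snr_db: float,
                               step_db: float = 1.0) -> float:
        if measured_pilot_snr_db < target_pilot_snr_db:
            return +step_db   # pilot weaker than the target: ask for more ACK/NACK power
        return -step_db       # pilot stronger than the target: the ACK/NACK power can come down

    # e.g. the BS measures the pilot at -21 dB against a -18 dB target
    print(ack_nack_adjustment_db(-21.0, -18.0))   # -> 1.0 (signalled to the MS)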

144.

As the Defendants also emphasise, the Patent does not specify what error performance targets can be achieved or by how much the average power can be reduced.

145.

There is no dispute that the invention has a number of advantages, although the Defendants contend that the advantages can also be realised from Shad. In particular, the scheme is a flexible one that permits the powers of the ACKs and NACKs to be modified independently, allowing error performance targets to be achieved at lower average power and different data services to be handled differently; and it facilitates soft handover in the uplink when using HSDPA in the downlink. This is achieved with only a modest increase in system complexity.

The claims

146.It is common ground that the only claim which it is necessary to consider is claim 10. Broken down into integers and omitting reference numerals, this is as follows:

“[1] A secondary station [i.e. MS] for use in a radio communication system

[2]

having a communication channel for the transmission of data packets from a primary station [i.e. BS] to the secondary station,

[3]

wherein receiving means are provided for receiving a data packet from the primary station

[4]

and acknowledgement means are provided for transmitting a signal to the primary station to indicate the status of a received data packet,

[5]

which signal is selected from a set of at least two available signal types,

[6]

wherein the acknowledgement means is arranged to select the power level at which the signal is transmitted depending on its type

[7]

and in dependence on an indication of the power level at which each type of signal is transmitted, the indication being signaled from the primary station to the secondary station.”

Construction

147.There is no dispute as to the interpretation of claim 10. It is common ground that the MS must be capable of receiving from the BS the indications of the power levels at which ACKs and NACKs are to be transmitted. It is also common ground that it covers an MS that can be used in a system where the gains on ACKs and NACKs can be set independently by the network, but is not limited to such a system. Thus it also covers an MS that is suitable for use in a system where there is a fixed differential relationship between the gains on ACKs and NACKs. It follows that the inventiveness of claim 10 cannot be judged by reference to the embodiments which have the advantages which flow from the ability to set the gains independently. Nor should it be judged by reference to the level of performance achieved, since this is not a feature of the claim.

The skilled person

148.

There is a small, but nevertheless potentially significant, dispute between the parties as to the identity of the person skilled in the art to whom the Patent is addressed. (The Defendants, and Mr Gould, referred to a skilled team; but this is not a case involving different disciplines, which is why both parties called a single expert.) It is common ground that the skilled person would have a degree in electronic engineering (or a similar subject) and would have worked in the mobile communications industry for at least two years. It is also common ground that the Patent is particularly addressed to someone who is working on UMTS, especially HSDPA. The dispute is as to whether or not the skilled person would be a regular attendee of 3GPP standardisation meetings, as the Defendants contend.

149.

Mr Edwards explained that regular attendees (referred to as “standards delegates”) were generally inventive individuals who were often named on patents and that, in addition to such regular attendees, there were also people who either occasionally attended meetings when a specific topic of interest was to be discussed (such as himself) or worked behind the scenes providing support to standards delegates. He expressed the opinion that the Patent was addressed to the latter two categories of person as well as standards delegates. Mr Gould accepted that the latter two categories of person existed, and did not identify any reason for thinking that the Patent would only be of interest to standards delegates.

150.

Counsel for the Defendants nevertheless argued that it was legitimate to test the obviousness of the Patent from the perspective of standards delegates, since, if it was obvious to a standards delegate, it was immaterial that it might not be obvious to occasional attenders and workers behind the scenes. I do not accept this argument. The Patent does not cover more than one field of activity so as to bring more than one kind of addressee into play. Accordingly, it is addressed to a single kind of skilled person. But the skilled person to whom it is addressed is not restricted to those who are most skilled in the field i.e. the standards delegates. Thus there is no evidence that the Patent could only be implemented by a standards delegate and not by an occasional attender or worker behind the scenes. I would add that, if the skilled person was a standards delegate, Mr Gould would not be representative of the skilled person.

151.

Accordingly, I conclude that the skilled person may be either a standards delegate or an occasional attender or a worker behind the scenes. It follows that the common general knowledge of the skilled person is the knowledge that is common to all three groups of people.

Common general knowledge

152. It is common ground that everything I have set out in the technical background section of this judgment was part of the common general knowledge.

cdmaOne

153.There was some dispute as to the extent of the skilled person’s knowledge of cdmaOne. On the point that matters, however, which is the skilled person’s knowledge of uplink power control in cdmaOne, the experts were more or less agreed that this would be reflected in the content of the relevant part of Mr Gould’s book, namely section 4.3.6.1 at pages 281-283, which deals with the subject at a fairly high level. It shows that IS95/cdmaOne used both open and closed loop processes.

Differential gain on channels and fields

154.The experts were agreed that the skilled person would know that in UMTS different channels could be transmitted at different powers depending on their differing error requirements and different fields within downlink channels could be sent at different power levels. When using BPSK in UMTS, the skilled person would be aware that the signal for one field in a channel could be transmitted with a particular gain and another field in the same channel could be transmitted with a different gain.

Differential powers in binary antipodal signalling

155.As discussed above, Mr Gould said in his first report that it was common general knowledge that there were three variables that could be altered to change the error probabilities in a binary signalling scheme, namely the voltage of the decision threshold, the voltage of the first signal and the voltage of the second signal, and hence increasing the power of either signal would decrease the error probability for that signal. As he accepted in cross-examination, however, none of the three books he relied on disclosed binary antipodal signalling with different powers for each signal and the only pre-Priority Date document disclosing this in evidence is Shad. Moreover, Mr Edwards did not agree that this was common general knowledge. Accordingly, I conclude that it was not. By contrast, there is no dispute that it was indeed common general knowledge that the decision threshold could be moved from 0.

Power control of uplink channels in UMTS

156.

It was common ground between the experts that the skilled person would be aware that the generally accepted method of power control for uplink channels is for the BS to control the MS. Moreover, there are good technical reasons for this. In UMTS this was done by means of a closed loop power control system which adjusted the power based on a given SIR target (see paragraph 116 above).
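
In outline, and again purely by way of illustration, a closed loop of this kind operates along the following lines; the 1 dB step used below is an assumption of the sketch rather than a feature taken from the Standard.

    # Illustrative sketch only: the BS compares the measured uplink SIR with its
    # target and sends a one-bit TPC command; the MS steps its power accordingly.
    def tpc_command(measured_sir_db: float, target_sir_db: float) -> str:
        return "UP" if measured_sir_db < target_sir_db else "DOWN"

    def apply_tpc(tx_power_dbm: float, command: str, step_db: float = 1.0) -> float:
        return tx_power_dbm + step_db if command == "UP" else tx_power_dbm - step_db

    power = 0.0
    for sir in (-22.0, -19.0, -17.0):   # successive SIR measurements against a -18 dB target
        power = apply_tpc(power, tpc_command(sir, target_sir_db=-18.0))
    print(power)                         # -> 1.0 after two "UP" commands and one "DOWN"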

157.

Counsel for the Defendants submitted that this meant that the skilled person would be aware that “the power control of uplink channels in UMTS was 100% controlled by” the BS. Counsel for Philips disputed this, and pointed out that it was Mr Edwards’ evidence that UMTS Release 4 permitted the gain factors βc and βd to be calculated by the MS based on a signalled setting for a reference TFC in the alternative to being signalled by the BS as described in paragraph 116 above. Moreover, this evidence is supported by paragraph 5.1.2.5.1 of TS 25.214 v4.2.0, to which Mr Gould referred in his first report when setting out the common general knowledge. I doubt that it matters, but I accept the submission of counsel for Philips on this point.

TS 25.308 and TR 25.855

158.

It is common ground that all of TS 25.308 v5.0.0, which is the technical specification for HSDPA referred to above, was common general knowledge and would have been the starting point for further development of HSDPA. Reference [1] in TS 25.308 v5.0.0 is TR 25.855, although reference [1] is not specifically cited anywhere in the TS. The Defendants contend that TR 25.855 v5.0.0 was common general knowledge, whereas Philips disputes this. It is important to note before proceeding further that much of TR 25.855 v5.0.0 was incorporated into TS 25.308 v5.0.0. Accordingly, what matters is whether the parts that were not incorporated were common general knowledge.

159.

Both TS 25.308 v5.0.0 and TR 25.855 v5.0.0 were approved at the 13th 3GPP TSG-RAN meeting in Beijing on 18-21 September 2001. Counsel for Philips put to Mr Gould a print-out from the 3GPP portal for TR 25.855 which states “Spec is Withdrawn from this Release at RAN #13” and suggested that this showed that TR 25.855 had been withdrawn. This was not a suggestion which had been advanced in any of Mr Edwards’ four reports, however, and it is inconsistent with the minutes of the meeting. The minutes of the 3GPP TSG-RAN WG1 HSDPA meeting in Sophia Antipolis on 5-7 November 2001 state that the latest version of TS 25.308 “has replaced TR on HSDPA”, but that was after the Priority Date. Moreover, that statement may simply have meant that the TR was no longer being worked on. Consistently with that, there are references to TR 25.855 in post-Priority Date documents.

160.

Mr Gould pointed out that there were numerous references to earlier versions of TR 25.855 in pre-Priority Date 3GPP contributions, but TR 25.855 v5.0.0 replaced those earlier versions and the earlier versions pre-dated TS 25.308 v5.0.0.

161.

Mr Edwards accepted that, in general terms, TRs supplemented TSs, could contain further commentary on relevant issues, could contain information about the current status of a feature that had not been finalised and could contain background information. He also agreed that standards delegates, and in particular those working on HARQ for HSDPA, would be aware of the content of TR 25.855, but not that those working behind the scenes would be aware of it or would consult it.

162.

On the basis of that evidence, counsel for the Defendants submitted that it followed that TR 25.855 was common general knowledge, whereas counsel for Philips submitted that it did not follow. Given my conclusion as to the identity of the skilled person, I conclude that this evidence does not establish that all of TR 25.855 was common general knowledge. This conclusion does not matter, however, because the only passage in TR 25.855 that the Defendants rely upon relates to the next point, which requires separate consideration.

Soft handover in HSDPA

163.

The Defendants contend that it was generally appreciated that, during soft handover in HSDPA, the serving BS that was sending the downlink data via the HS-DSCH would need to be the BS that received the HARQ response, and that therefore, if normal UMTS power control was applied to the HARQ response on the uplink HSDPA control channel, it might not be received at the serving BS with sufficient power to be decoded reliably. Accordingly, so the Defendants contend, the skilled person would have appreciated that (i) it was necessary to have a mechanism to ensure reliable detection of ACKs/NACKs at the serving BS, (ii) one such mechanism would be to allow the BS to control the transmit power of the ACK/NACK signals at the MS relative to the other signals transmitted by the MS and (iii) an appropriate offset could be calculated based on the SIR of the uplink.

164.

Philips disputes that either the problem or the proposed solution was part of the common general knowledge. As I see it, the important question is whether the proposed solution was common general knowledge, but I accept that the two are linked and so I shall consider both.

165.

Counsel for the Defendants submitted that Mr Edwards had accepted that the problem was common general knowledge in the following passage of cross-examination:

“Q. The final topic on the common general knowledge, Mr. Edwards. We discussed on Friday the power control mechanism. A. Yes.

Q. And how that worked when a mobile was in soft handover; do you remember that? A. Yes.

Q. In that circumstance, in the UMTS circumstance, the mobiles power is set to the power of the base station that requires the least power? A. Yes.

Q. But the skilled person would obviously be aware in HSDPA that the data, the high speed data, is only from and to the single serving HSDPA base station? A. Correct.

Q. What that might mean is there might be fading on the channel between the mobile to a serving base station, but that would not be taken account of by ordinary power control, because the power control would be being controlled by a different base station?

A. Yes.”

166.

This submission is based on selective quotation, because the very next question and answer were as follows:

“Q. The skilled person would recognise that was a problem that could happen?

A. The skilled person would recognise it as a question, and then based on some of the documents it did appear that the uplink signalling channel was quite robust. There are one or two pieces which no doubt you will come on to where it is discussed, but the general discussion was not around that point.”

Thus what the witness was saying was that it was known that there might be an issue, but it was not generally thought to be a problem.

167.

The matter does not end there, however, because, as the witness correctly anticipated, the Defendants also rely upon a number of contributions to 3GPP which recognised the problem and proposed the solution to it:

i)

R1-01-0571, a contribution by Ericsson to TSG-RAN WG1 meeting 20 in Busan, Korea on 21-25 May 2001. This notes at page 2 that “the Hybrid-ARQ signalling may need to be transmitted with a different power, compared to the other DPCCH fields, as the required power for the Hybrid-ARQ signalling may depend on e.g. whether the UE is in soft handover or not” and argues for an approach that “allows for simple independent power setting for DPCCH and uplink Hybrid-ARQ signalling. As already mentioned, the required received energy per Hybrid-ARQ ‘acknowledgement’ may vary significantly between a soft-handover and a non-soft handover situation.” Mr Edwards accepted that this was recognising that the power of the ACK/NACK signalling might need to be separately controlled in the soft handover situation because the serving HSDPA BS might not be the one that was doing the power control.

ii)

R2-01-1177, a contribution by Nokia to TSG-RAN WG2 meeting 21 in Busan, Korea on 21-25 May 2001. This proposes various “HSPA related signalling parameters in downlink”, one of which is described on the fourth page as follows:

“Power offset for uplink control channel

This will inform to the UE what kind of power offset it should use in uplink, when sending e.g. ACK during soft handover. NodeB could estimate the SIR from uplink, and calculate the needed power offset in uplink, in order to make sure that ack can be decoded reliably.”

Mr Edwards accepted that this set out the problem and the proposed solution.

iii)

R1-01-0874, a contribution by Samsung to TSG-RAN WG1 meeting 21 in Turin, Italy on 27-31 August 2001. This begins by listing the downlink signalling parameters discussed in TR 25.855, one of which is “Power offset for uplink control channel”. It goes on to say on the eighth page:

“2.8 Power offset for uplink control channel

When UE is in soft handover region, the uplink power level can be inappropriate. Therefore, power offset for uplink control channel is needed. Example proposals on the number of bits required for signalling UL power offset are shown in Table 10.

… This information does not need to be sent before HS-PDSCH and it should be received by UE only before the ACK/NACK will be sent. … ”

Mr Edwards accepted that what Samsung was doing in this paper was “gathering together what everyone was talking about in the standardisation meetings and reflecting that back”.

168.In addition, the Defendants rely upon section 9.1.7 of TR 25.855 v5.0.0, which forms part of a discussion of downlink signalling parameters and states:

“9.1.7 Power offset for uplink control channel

This informs the UE what kind of power offset it should use in the uplink, when sending e.g. ACK during soft handover. Node B could estimate the SIR from the uplink, and calculate the needed power offset in the uplink, in order to make sure that an ACK can be decoded reliably. This information may be sent at a much lower rate than the other parameters described in this section.”

169.

In his second report Mr Edwards accepted that this reflected “one school of thought”, but said that he did not think that this was “the accepted view in the industry”. The heading and the first two sentences first appeared in v0.0.2 of TR 25.855 following the WG1 and WG2 ad hoc meeting on HSDPA in Sophia Antipolis on 5-6 April 2001. Mr Edwards accepted in cross-examination that the problem and solution had been captured in the TR; that, on the face of it, this reflected the agreement of RAN WG1 and WG2 at the time; and that there were no adverse comments on the paragraph through to October 2001.

170.

The Defendants also rely upon what is said about soft handover in the Patent at [0048] and suggest that this assumes that the skilled person is aware of the problem in HSDPA. When this was put to him, Mr Edwards accepted that it assumed that the reader understood what soft handover was and that it had ceased to work in the way it had in earlier Releases, but did not accept that the reader would need to know any more than that.

171.

As for Mr Gould, he expressed the view in his reports, and maintained in cross-examination, that the problem and the proposed solution would have been well known to the skilled person, although his view was essentially based on his reading of the documents.

172.

The conclusion I draw from the evidence as a whole is that the skilled person would have been aware that there was a potential problem with power control of ACK/NACK signals during soft handover and that a solution to it had been proposed in the form of a power offset (i.e. a gain factor) applied to the ACK/NACK signals equally. It had not been agreed that the offset solution should be adopted, which is why it was not in TS 25.308. It appears that the reason why it had not been adopted was that there was no consensus as to how serious the problem was. But there appears to have been no disagreement that the proposed solution was a technically viable one if there was a real problem.

Agreed key points

173.Counsel for the Defendants set out in his closing submissions a list of key points of common general knowledge, many of which were accepted by counsel for Philips or accepted with qualifications. The agreed key points were as follows:

i)

The cost of a false ACK is more significant than the cost of a false NACK.

ii)

Signals sent at higher powers are more reliably detected at the BS, but the use of more power may increase interference between signals in a CDMA system. The probability of a receiver correctly interpreting a signal can be manipulated by varying the power at which the signal is sent, and increasing interference at a BS would decrease the total system capacity. This can be thought of as “Shout louder if you want to be more sure you will be heard”.

iii)

Reducing interference at the BS is beneficial in that it makes it easier for the BS to receive signals from the MSs it is serving.

iv)

In conventional modulation, the further away the received voltage of a signal is from the decision threshold, the lower the probability of error.

v)

As discussed above, the generally accepted method of power control for uplink channels was for the BS to control the MS.

vi)

The way fast fading was dealt with in CDMA was by closed loop power control. This worked by the BS monitoring the uplink signal from the MS and comparing it to a target SNR, which was related to the number of errors (the higher the SNR, the lower the errors, and vice-versa).

vii)

At the Priority Date a new uplink control channel for HSDPA was proposed and specified in TS 25.308.

viii)

It had been decided that the HARQ protocol for HSDPA would use a Multi-Channel SAW process, which was asynchronous on the downlink and synchronous on the uplink. The acceptable error rates for the ACK and NACK messages in HSDPA had not been agreed at the Priority Date, however.

ix)

It was known that in UMTS Release 4 the closed loop power control mechanism ensured that a MS in soft handover with two or more BSs transmitted sufficient power to communicate with at least one BS (i.e. the BS(s) with the best uplink channel quality).

x)

It was known from UMTS Release 4 that uplink power levels could be set by the BS relative to the uplink power of the pilot bits sent on the DPCCH.

xi)

The skilled person would not be concerned by the possibility of errors due to DTX in the context of ACK/NACK signalling in HSDPA.

System design

174.It was agreed between the experts that the skilled person would not want to introduce additional complexity into the system without some real-world benefit. It was also agreed that, whereas BSs can be updated relatively easily through software changes, the same is not true of MSs (although new features can be supported by new phones).

Motorola 021

175.

Motorola 021 is a proposal for the uplink and downlink control channel structures for HS-DSCH in HSDPA (see paragraphs 123-125 above). The Defendants focus on the disclosure of Motorola 021 as far as it relates to the uplink control channel structure. This disclosure is very brief.

176.

The Summary on page 1 explains:

“It is desirable to use BPSK coherent detection for transmitting the ACK bit on the uplink so that a 10^-5 false alarm rate with approximately 0.99 probability of detection can be maintained.”

177.It is common ground that the skilled person would appreciate that:

i)

the ACK bit signals an ACK or NACK depending on its value;

ii)

the reference to a “10^-5 false alarm rate” means a probability of 0.00001 of the BS interpreting a transmitted NACK as an ACK, i.e. a 1 in 100,000 chance of there being a false ACK; and

iii)

the reference to “0.99 probability of detection” means a 99 in 100 chance of the BS correctly interpreting a transmitted ACK as an ACK i.e. a 1 in 100 chance of there being a false NACK.

178.

Thus the skilled person would see that the system being proposed has been designed such that the probability of the BS misinterpreting a NACK (a false ACK) is much lower than the probability of misinterpreting an ACK (a false NACK). This accords with the common general knowledge that rectifying a false NACK is likely to be less costly in that it only impacts on the lower protocol layer, whereas a false ACK will impact on the higher protocol layers.

179.

Motorola 021 describes its proposed structure for the uplink DPCCH in section 2.0. The uplink DPCCH from UMTS Release 4 is to be modified by adding two new fields, an acknowledgement (Ack) field with N_ack bits for HARQ and a measurement (Meas) field with N_mes bits for reporting the downlink channel quality. The DPCCH is to retain the pilot, TFCI, FBI and TPC fields from Release 4 (see paragraph 114 above).

180.

Figure 1 shows the proposed arrangement of these fields in a slot of the control information in the uplink DPCCH.

181.

Example values of the fields are set out in Table 1. This shows that N_ack may be 4 or 6 bits depending on the slot format.

182.

Motorola 021 states at page 2:

“The ACK/NACK bits are sent using BPSK modulation i.e. if the HS-DSCH packet is decoded correctly an ACK bit (+1) is transmitted and if it is decoded in error a NACK bit (-1) is transmitted. With the proposed slot format the ACK bits are repetition coded 20 or 30 times. A separate gain control may be used for ACK bits so that those bits can be decoded with high probability (0.97-0.98) and with low probability of false alarm (1e-05) at Node-B. The ROC for optimal coherent BPSK demodulation given 1 path and a single receive antenna in AWGN [is shown] in Figure 2.”

183.

Figure 2 is reproduced below.

184.

Figure 2 is a Receiver Operating Characteristic (ROC) curve. Mr Edwards’ evidence was that the skilled person would be familiar with ROC curves. Mr Gould did not agree with this, but acknowledged that they are explained in Viterbi, CDMA: Principles of Spread Spectrum Communication (Addison-Wesley, 1995) at 50-52, which is reference [2] in Motorola 021. The horizontal axis is “PDet”, which indicates the probability of correct detection by the receiver of a transmitted signal (here an ACK). The vertical axis is “PFa” which is the probability of a false alarm i.e. a false ACK. Thus the graph shows the trade-off between the probability of detection and the probability of a false alarm under the conditions stated.

185.

The heading indicates that the curve has been plotted for an SNR (Ec/N0) of -25 dB. The value T=5 would be understood by the skilled person to mean an integration over five slots in one TTI i.e. five repetitions of the ACK field (each of which contains either 4 or 6 bits according to the slot formats proposed in Table 1). As a result, in that TTI, there will be either 20 or 30 ACK/NACK bits (hence the statement in the text that “the ACK bits are repetition coded 20 or 30 times”).

186.

The curve shows a series of different possible operating points corresponding to different settings of the detection threshold. It can be seen from the graph that a false alarm rate of 10^-5 (1.00E-05) or less can be achieved with a probability of detection ranging from 0.973 to 0.9975. If the detection threshold is biased for detection, the target of a 10^-5 false alarm rate with a probability of detection of 0.99 can be comfortably met.
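
The trade-off can be illustrated numerically (this does not reproduce Figure 2, whose underlying parameters are not set out above): for equal-power antipodal signalling in Gaussian noise, moving the decision threshold towards the ACK signal point reduces the false alarm probability at the cost of a lower probability of detection. The amplitude of 3.3 noise standard deviations chosen below is an arbitrary assumption of the sketch.

    from math import erfc, sqrt

    # Illustrative sketch only: ACK sent at +d, NACK at -d, unit-variance Gaussian
    # noise, decision "ACK" if the received value exceeds the threshold z.
    def q(x: float) -> float:            # Gaussian tail probability Q(x)
        return 0.5 * erfc(x / sqrt(2.0))

    d = 3.3                              # assumed post-combining amplitude (in noise std devs)
    for z in (0.0, 0.5, 1.0):            # raising z biases the detector against false ACKs
        p_det = 1.0 - q(d - z)           # P(declare ACK | ACK sent)
        p_fa = q(d + z)                  # P(declare ACK | NACK sent), i.e. a false ACK
        print(f"z={z:.1f}  Pdet={p_det:.4f}  Pfa={p_fa:.1e}")
    # With these assumed numbers, z=1.0 gives Pdet of roughly 0.989 and Pfa below 1e-05.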

187.

There is a dispute as to how the skilled person would interpret the sentence “A separate gain control may be used for the ACK bits so that those bits can be decoded with high probability (0.97-0.98) and with a low probability of false alarm (1e-05) at Node-B.” Mr Edwards’ consistent evidence was that the skilled person would interpret this as meaning that the gain on the Ack field could be changed relative to the other fields (i.e. irrespective of whether an ACK or NACK was being sent).

188.

Mr Gould’s evidence in his first report was that:

“7.79 The reference to ‘gain control’ could mean changing the gain on the bit irrespective of the information it carries (ACK or NACK) or it could mean changing the gain on the ACK bit relative to the NACK bit. I believe the skilled team would probably have understood that the authors of the paper meant the former. There is the mention of ACK ‘bits’ as opposed to the ACK bit and NACK bit and there is no discussion elsewhere in the paper about altering the relative gain on the ACK and NACK bits.

7.89 I believe that the skilled team would probably understand Motorola to be suggesting [applying the same power gain to both the ACK and NACK signals, but biasing the detector by moving the decision threshold], although it is not clear. …”

189.

Mr Gould’s evidence in his second report was that:

“6.6 … On balance I think that the skilled team would understand that in Motorola the different target error rates for false ACKs and false NACKs have been achieved by biasing the detector (as I said at Paragraphs 7.89 and 7.92 of my First Report), most likely in conjunction with an additional separate gain to the ACK/NACK bits (see my Paragraphs 6.4 to 6.5 above). …

6.16 At Paragraph 375, Mr Edwards makes the point that Motorola only refers to a single gain for the ACK bits field. Whilst I believe that this is how the skilled team would interpret the gain control described in Motorola, as I have explained in my First Report and above, the skilled team would be aware that the asymmetric error rates set out in Motorola could be obtained either by applying an equal gain to the ACK field bits and biasing the detector, or by applying unequal gains to the two signals. The two were obvious technical alternatives.”

190.

Despite this, during cross-examination, Mr Gould said that, reading Motorola 021 down to the sentence in question, the skilled person would think that what it meant was that different gains were to be applied to ACKs and NACKs, but that the ROC curve in Figure 2 would lead the skilled person to conclude that that was not what was meant, but rather a single gain was being applied to the ACK field. Although Mr Gould’s final position was that the skilled person would ultimately interpret Motorola 021 as disclosing a single gain, counsel for Philips submitted, and I agree, that his evidence in cross-examination is significant for the following reasons.

191.

First, Mr Gould’s reason for saying that the sentence in question would be read in the manner he suggested was very unconvincing. The natural way to read it is as disclosing gain control for the Ack field for the reasons given by Mr Gould himself in the last sentence of paragraph 7.79 of his first report, and that is how it was read in a later, pre-Priority Date contribution 20(01)0477 by Qualcomm to WG1 meeting 20 in Busan, Korea on 21-25 May 2001 (as well as being confirmed in a later, pre-Priority Date contribution R1-01-0744 to the WG1 meeting in Korpilampi, Finland on 26-28 June 2001 by Motorola itself). In cross-examination Mr Gould suggested that applying differential gains to the ACKs and NACKs was “the only way you can get the different error rates”, but as he accepted elsewhere this is possible with a single gain and a biased detection threshold.

192.

Secondly, Mr Gould’s reason for saying that the skilled person would nevertheless ultimately come to the conclusion that Motorola 021 disclosed a single gain for the Ack field was also very unconvincing. He said that this was because of the ROC curve. But he also said that the ROC curve could have been generated either way and that the skilled person would not know how it had been generated. Moreover, Mr Edwards’ unchallenged evidence was that the ROC curve related to different detection thresholds.

193.

Thirdly, in my judgment what this shows is that, as mentioned in paragraph 16 above, Mr Gould was influenced in his reading of Motorola 021 by his mistaken view that the use of differential powers in binary antipodal signalling was common general knowledge, that is to say, by hindsight.

Obviousness over Motorola 021

194.Difference between Motorola 021 and claim 10. It is common ground that the only real difference between Motorola 021 and claim 10 is that Motorola 021 does not disclose differential powers being used to transmit ACKs and NACKs. (Although Motorola 021 does not expressly disclose that the gain factor is signalled by the BS to the MS, Mr Edwards accepted that that would be a standard way of implementing it. He also accepted that a straightforward way to do that would be for the BS to measure the SNR of the pilot bits and signal the MS to adjust the ACK/NACK power accordingly.)

195.Primary evidence. The Defendants’ case is concisely summarised in paragraph 6.16 of Mr Gould’s second report quoted in paragraph 189 above. This is that the skilled person reading Motorola 021 would be aware from their common general knowledge that the error rates in Motorola 021 could be achieved either by applying a single gain to the ACK field and biasing the detector (in addition to the repetition coding) or by applying different gains to the ACK and NACK signals, and that these were obvious technical alternatives.

196.

In my judgment this case fell apart during Mr Gould’s cross-examination for the reasons explained above. In summary, it was not common general knowledge that different gains could be applied to binary antipodal signals, let alone ACK and NACK signals; Mr Gould’s mistaken view that it was common general knowledge was based on hindsight; and this affected his reading of Motorola 021 and his opinion as to what was obvious in the light of it.

197.

An additional point is that Mr Gould accepted that the skilled person would consider that the proposal disclosed in Motorola 021 outperformed the target error rates at an SNR at the receiver of -25 dB. As Mr Gould also accepted, the skilled person would have regarded that SNR as a reasonable reception target, since -25 dB is lower than -24 dB and both Motorola (in R1-01-0744) and WG1 (in the minutes from an ad hoc meeting in Espoo, Finland on 26-28 June 2001) recorded that -24 dB was reasonable. Thus the skilled person would consider that following Motorola 021’s approach would be likely to enable them to achieve the same target error rates at an even lower SNR. Accordingly, in my judgment, the skilled person would not be motivated to change Motorola 021’s approach.

198.

Mr Edwards’ evidence in his reports was that there was nothing in Motorola 021 to point towards applying different powers to the ACKs and NACKs and that this would require a significant shift in thinking since the use of different powers in BPSK was not common general knowledge. Counsel for the Defendants relied upon a passage of Mr Edwards’ cross-examination which he submitted showed that Mr Edwards had accepted that it was obvious to arrive at the invention, but that passage was based on asking Mr Edwards to assume that the skilled person was considering implementing Motorola 021 using the two options described by Mr Gould, which depends on the second option being common general knowledge.

199.

Secondary evidence. None of the pre-Priority Date contributions to 3GPP suggests any consideration of network-controlled independent power for ACK and NACK signal types. Instead, there was a general understanding and expectation that Multi-Channel SAW was robust against the identified error cases. The Defendants contend that this can be explained by the way in which the settling of the relevant aspects of the Release 5 Standard was progressed, with many different proposals for HARQ schemes being put forward in the period May-July 2001, and that it was only following the HSDPA ad hoc meeting in Helsinki, Finland on 27-31 August 2001 that attention could be focussed on issues like coding and transmit power. Even so, it seems to me that this evidence is more consistent with non-obviousness than obviousness.

200.

Perhaps more importantly, Mr Gould could not explain why, if it was obvious, Motorola had missed the alternative to its approach not only in Motorola 021, but also in Motorola’s follow-up contributions before the Priority Date.

201.

The Defendants rely on post-Priority Date contributions from LG and Lucent to WG1 meeting 23 in Espoo on 8-11 January 2002 as confirming that differential ACK/NACK powers were obvious. But that pre-supposes that LG and Lucent came up with the invention independently from Philips (and each other). The Defendants did not adduce any evidence from either LG or Lucent to establish this, however. Counsel for the Defendants simply relied upon the absence of any reference to Philips’ contributions R2-24(01)2366 and R2-24(01)2368 to the WG2 meeting in New York on 22-26 October 2001 which disclosed the invention to 3GPP as showing this (or at least as being sufficient to shift the onus of proving the contrary onto Philips, which had not led any evidence of fact on the question). I do not accept this, since it is possible that LG and Lucent were aware of Philips’ contributions despite the absence of such reference. For example, Dr Farooq Khan of Lucent attended both the October 2001 WG2 meeting and presented the Lucent paper at Espoo. In any event, I agree with counsel for Philips that the mere fact, if fact it be, that LG and Lucent came up with the same invention shortly after the Priority Date is insufficient to establish that it was obvious at the Priority Date.

202.

Conclusion. I conclude that claim 10 was not obvious in the light of Motorola 021.

Shad

203.

Shad relates to an extension to cdmaOne called 1XTREME, also known as EV-DV. It sets out a proposal for the uplink HARQ acknowledgement channel. Although Shad contains quite a lot of mathematics, it is not necessary to delve into the mathematics for present purposes.

204.

The abstract states:

“In this contribution the transmit gains for an antipodal signaling scheme in which the transmit probabilities are known a priori is jointly optimized with the receiver hard decision device threshold value in order to obtain the required error probabilities for a minimum bit SNR. This type of signaling for example applies to the Hybrid ARQ acknowledgement channel in which the average frame error rate is known to the transmitter, and certain false acknowledgement and false negative acknowledgement probabilities are prescribed by the upper layers.”

205.

The reference to Hybrid ARQ means that the antipodal signals are acknowledgements and negative acknowledgements generated by a HARQ scheme (i.e. ACKs and NACKs). The Frame Error Rate (FER) would be understood as a measure of how often the MS has not correctly received a transmitted data packet, and therefore has to send a NACK to the BS.

206.

The abstract indicates that, if the probability of sending an ACK or a NACK is known (“the transmit probabilities are known a priori”), one can jointly optimise the transmit power applied to the ACK and NACK signals (“the transmit gains for an antipodal signalling scheme”) and the detector threshold (“the receiver hard decision device threshold value”). The purpose of the optimisation is “to obtain the required error probabilities for a minimum bit SNR”. Minimising the bit SNR means minimising the signal power being delivered to the BS (i.e. each MS transmitting ACK and NACK messages to the BS with as little power as possible) which in turn will reduce the interference.

207.

The statement that the false ACK and false NACK “probabilities are prescribed by the upper layers” means that the target required probabilities of false ACK and false NACK are not set at the physical layer of the protocol stack, but at some higher layer.

208.

The introduction in section 1 states:

“The objective of this contribution is to obtain the optimal power allocations to an antipodal signaling scheme such that the required performance is achieved with a minimum bit SNR. This is done by applying unequal gains to the transmit voltages of the two possible signals. At the receiver, the threshold of the hard decision device is biased so that the required error rate is achieved for each of the two types of errors.”

209.

Thus Shad proposes that ACKs and NACKs are transmitted using different powers and the detection threshold at the BS is biased so as to achieve the required error rates i.e. the required levels of false ACKs and false NACKs.

210.

Shad describes the problem in mathematical terms in section 2.1 by reference to Figure 1 reproduced below:

211.

In Figure 1:

i)

“s1” is an ACK signal, and “s2” is a NACK signal.

ii)

The probability “p” is the probability of transmitting “s1” (i.e. an ACK) and the probability “1-p” is the probability of transmitting “s2” (i.e. a NACK). In other words, the probability of the MS transmitting either an ACK or NACK is 1. This is related to the quality of the downlink channel (amongst other things).

iii)

“k” is the gain to be applied to the power of the ACK signals and “l” is the gain to be applied to the power of the NACK signals. These affect the voltage at which each type of signal is sent by the MS: the higher the gain, the higher the voltage (which depends on the square root of the transmit power, or the square root of the applied gain k or l).

iv)

“z” is the decision threshold in the receiver. If a received signal is higher than z, it is assumed to be an ACK, otherwise it is assumed to be a NACK. Thus moving z to the left decreases the probability of false NACKs, but increases the probability of false ACKs.

v)

“pfack” is the probability of a false ACK.

vi)

“pfnack” is the probability of a false NACK.

212.

The text below Figure 1 in Shad states that:

“The goal is to minimize the bit SNR γb, defined by Equation 1, subject to the constraint that the false ACK probability remain below pfack-req and that the false NACK probability be below pfnack-req.”

213.

This confirms that the goal of the optimisation process described in Shad is to minimise the mean bit SNR γb of the ACK and NACK bits, which in turn means minimising the transmission power of these acknowledgement bits, within the constraint of ensuring that the probabilities of false ACKs and false NACKs (i.e. pfack and pfnack) do not exceed the pre-defined threshold values pfack-req and pfnack-req.
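
Although neither Shad’s Equation 1 nor the equations in Appendix A are reproduced above, the quantities in Figure 1 can be expressed, purely by way of illustration, on the usual assumptions that the noise is zero-mean Gaussian with standard deviation σ and that the ACK and NACK are sent at voltages +√k and −√l respectively (Q denoting the Gaussian tail function):

    p_{\mathrm{fack}} = \Pr(r > z \mid \mathrm{NACK\ sent}) = Q\!\left(\frac{z + \sqrt{l}}{\sigma}\right),
    \qquad
    p_{\mathrm{fnack}} = \Pr(r < z \mid \mathrm{ACK\ sent}) = Q\!\left(\frac{\sqrt{k} - z}{\sigma}\right),

    \gamma_b \propto p\,k + (1-p)\,l,
    \qquad
    Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-t^{2}/2}\,dt .

On this illustrative formulation, minimising γb for a given p amounts to minimising the probability-weighted average of the two gains, i.e. the average acknowledgement transmit power.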

214.

Shad then refers to Appendix A, in which the optimal decision threshold z for a maximum a posteriori (MAP) detector is derived. The authors say that they have found “the MAP detector to be sub-optimal when pfack-req is different from pfnack-req”, and therefore they have devised an “exhaustive search optimisation algorithm for minimising γb”.

215.

In section 2.1 Shad goes on to describe this exhaustive search optimisation algorithm. This section models an idealised system to find the optimum values for k, l and z for an AWGN channel. The way the algorithm works is that a value of p is chosen (e.g. p=0.2) and then, starting with a SNR which is too low, an exhaustive search is carried out to find the values of k, l and z that result in the lowest SNR within the constraints on pfack-req and pfnack-req that are specified. If no solution is available for the given SNR, it is increased by a step (“say 0.25 dB”) and the exhaustive search is carried out again for k, l and z. Multiple values of p are modelled to take account of the fact that p will vary dependent upon the downlink channel conditions during transmission.
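
The following sketch (illustrative only, and not Shad’s own code) implements a search of this general kind. It assumes unit-variance Gaussian noise, ACK and NACK voltages of +√k and −√l, and takes the mean bit SNR simply as the probability-weighted average gain p·k + (1−p)·l; the grid sizes are arbitrary assumptions, while the 0.25 dB step follows the description above.

    from math import erfc, sqrt

    # Illustrative sketch only of an exhaustive search of the kind described above.
    def q(x: float) -> float:                    # Gaussian tail probability Q(x)
        return 0.5 * erfc(x / sqrt(2.0))

    def search(p, pfack_req=1e-6, pfnack_req=1e-3, start_db=0.0, step_db=0.25):
        snr_db = start_db                        # deliberately start too low
        while True:
            avg = 10 ** (snr_db / 10.0)          # average-gain budget at this SNR
            for i in range(1, 200):              # split the budget between ACK and NACK
                f = i / 200.0
                k = f * avg / p                  # ACK gain
                l = (1.0 - f) * avg / (1.0 - p)  # NACK gain
                for j in range(401):             # candidate decision thresholds
                    z = -10.0 + 0.05 * j
                    if q(z + sqrt(l)) <= pfack_req and q(sqrt(k) - z) <= pfnack_req:
                        return snr_db, k, l, z
            snr_db += step_db                    # no feasible (k, l, z): raise the SNR and retry

    # NACKs are the rarer signal at p=0.9, so the NACK gain l comes out much larger than k
    print(search(p=0.9))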

216.

Table 1 in section 3 of Shad sets out the results of these calculations, showing the values of SNR, k, l and z for different values of p for the optimal detector and of SNR for the MAP detector assuming pfack-req is 10^-6 and pfnack-req is 10^-3.

217.

As Shad explains:

“From the table it can be seen that the required γb is minimal when p is either very small or very large. In these cases a large voltage is applied to the less likely signal, and hence the distance between the signal points is relatively large for a small γb as defined by Equation 1. It is also interesting that the decision threshold z tends to be biased in the direction of the ACK bit that is assigned a positive voltage when pfack_req << pfnack_req. This minimizes the chance of a false ACK at the expense of a higher probability of a false NACK. Finally, the optimal detector outperforms the MAP detector by approximately 2 dB for the selected parameters.”

218.

For example, the first row of the table shows the situation where ACKs are the less likely signal (p is 0.1 i.e. 10% ACKs and 90% NACKs), which results in a large gain on the ACK signals (k is large) and a small gain on the NACKs (l is small). In the final row of the table NACKs are the less likely signal (p is 0.9 i.e. 90% ACKs and 10% NACKs), resulting in a small gain on the ACK signals (k is small) and a large gain on the NACKs (l is large).

219.

When the value of p is around 0.6, the gain applied to the ACKs and NACKs is approximately equal. Mr Gould’s evidence was that the skilled person would probably think that the gain ought to be equal at p=0.5 in a truly optimised system, and attribute the difference to the step size used by Shad in the algorithm. Mr Edwards did not agree with this, and thought that the skilled person would take Shad at face value. This disagreement is related to the disagreement I consider in the next two paragraphs; but I shall return to Mr Gould’s evidence on this point later.

220.

When considering the comparison with the MAP detector, it was common ground between the experts that the skilled person would be aware that a MAP detector attempts to minimise the total number of errors (or, to put it another way, the average error probability). This means that, where there are different error target rates, the MAP detector will be dominated by the most stringent one (here 10^-6). As Mr Edwards pointed out, Shad states that the values of z, k and l in Table 1 are those for the optimal detector; it does not give the corresponding values for the MAP detector. Mr Gould’s evidence was that it was implicit that k and l were the same (but not z). Mr Edwards accepted that, if one made that assumption, it was possible to calculate the z, pfack and pfnack values for the MAP detector for the values of SNR, k and l given in Table 1 using one of the equations in Annex A. If this is done, the pfack values do not quite meet the 10^-6 target in some cases (but slightly exceed it in others). Mr Gould’s explanation for this was that Shad used 0.25 dB step sizes so it was not possible to obtain a SNR figure to the level of granularity that produced precisely 10^-6. Had Shad gone for an SNR value 0.25 dB higher (in the case of the values of p for which the rates were slightly under 10^-6) or 0.25 dB lower (in the case of the values of p for which the rates were slightly over 10^-6), he would have over- or undershot the rates. Shad was therefore simply reporting the SNR value that gave the error rates closest to the target so as not to overstate the benefits of the optimal detector.

221.

I accept Mr Gould’s explanation; but, given that none of this is explained by Shad and that the skilled person would have to do the calculations and then think through the consequences of Shad’s selected step size in order to see this, I accept Mr Edwards’ evidence that the comparison with the MAP detector is “poorly explained and would not be properly understood by the skilled person”. Mr Edwards thought that the skilled person would therefore view the claimed 2 dB benefit “with some scepticism”. He accepted, however, that the skilled person would understand at a general level that Shad was claiming that there was an improvement.

222.

Furthermore, as Mr Edwards also accepted, it can be seen from Table 1 that there is not just an advantage of the optimal detector over the MAP detector. For both detectors, there is an average power saving of several dB as between the SNR required to achieve the error rates at p=0.5 and the SNR required at low or high values of p. Thus Shad shows that power savings can be made by differentially powering the ACKs and NACKs depending on the probability of transmission.
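The averaging itself can be illustrated with a minimal sketch, on the standard assumption (not a value taken from Shad’s Table 1) that the power spent on an acknowledgement scales with the square of the applied amplitude gain:

```python
def average_ack_nack_power(p, k, l):
    """Illustrative average acknowledgement power when ACKs are sent with
    probability p at amplitude gain k and NACKs with probability (1 - p)
    at amplitude gain l; power is assumed to scale with the square of the
    gain. The gains themselves are illustrative, not Shad's values."""
    return p * k ** 2 + (1 - p) * l ** 2
```

On this view, a scheme that reserves the large gain for the less likely signal keeps the average power down even though the individual gains differ.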

223.

Section 4 of Shad is headed “Implementation Considerations”. This states:

“Due to the fading channel and power control, the actual Eb/Nt requirement and optimal values of z, k, and l may be quite different from the values reported in Table 1. One possible approach for obtaining the correct values for z, k, and l in the context of the Reverse Acknowledgement Indicator Subchannel of 1XTREME is as follows. The ratio of k to l can be determined by the measured FER on the Forward Shared Channel. The mobile keeps track of pfack and pfnack. It can gather these statistics based on the number of duplicate and missing frames that are observed. If either pfack or pfnack are too high, the values of k and l are scaled up by a constant. If both pfack and pfnack are too low, then k and l are scaled down by a constant. The value of z can be initialized based on a Gaussian channel assumption. Then it can be adjusted based on feedback from the mobile.”

224.

Thus what Shad describes as “one possible approach” is that the mobile can determine the ratio of k and l from the measured FER (and hence p, since p = 1 - FER) on the downlink channel from the BS to the MS, calculate pfnack and pfack by observing the number of duplicate and missing frames and then scale the values of k and l up or down, keeping the ratio between them constant for a given p. It also indicates that the threshold used at the BS, z, could be initialised with some value based on the assumption of a Gaussian channel, but then updated using feedback provided by the mobile.
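The scaling step of that approach can be sketched as follows; this is a minimal illustration only, and the scaling factor and function name are assumptions rather than values taken from Shad:

```python
def scale_gains(k, l, p_fack, p_fnack, p_fack_target, p_fnack_target, factor=1.1):
    """Illustrative MS-side scaling in the spirit of section 4 of Shad: the
    ratio k:l is taken to have been fixed already from the measured FER
    (p = 1 - FER); both gains are scaled up by a constant if either observed
    error rate is too high, and scaled down if both are too low."""
    if p_fack > p_fack_target or p_fnack > p_fnack_target:
        return k * factor, l * factor
    if p_fack < p_fack_target and p_fnack < p_fnack_target:
        return k / factor, l / factor
    return k, l
```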

225.

Shad does not say, and the experts were agreed that the skilled person would be unable to tell, what overall benefit would be achieved by implementing Shad in a real system.

Obviousness over Shad

226.

Difference between Shad and claim 10. It is common ground that the only difference between Shad and claim 10 is that Shad does not disclose that the values of the ACK and NACK gains are indicated to the MS by a message sent from the BS. Instead, section 4 of Shad suggests that the MS sets the value of the ACK and NACK gains itself, and scales them to take account of fading and power control.

227.

Primary evidence. Philips contends that, having read Shad with interest, the skilled person would put it on one side and would not consider that it was worth taking forwards, whether generally or in the specific context of UMTS. Mr Edwards’ evidence was that the skilled person would be sceptical as to whether the theoretical 2 dB benefit compared to the MAP detector (even if that was taken at face value) would translate into a real-world system benefit, and therefore would not pursue Shad, for six reasons which I will consider in turn.

228.

First, Shad does not take into account the real-world issues of uncompensated fading or imperfect power control, and thus the values in his table will not be optimum in a real system. As Mr Edwards himself pointed out, however, Shad expressly acknowledges this very problem in the first sentence of section 4, and Shad’s proposed implementation addresses it by gathering statistics at the MS and applying scaling to make corrections.

229.

Secondly, Shad’s assumed target error rates for false ACK and false NACK are not justified and the skilled person may not know if they are appropriate for a real system. Shad makes clear, however, that they are examples chosen to illustrate the benefits of Shad’s proposal. The skilled person could readily repeat the algorithm using different error rates. The evidence of both Mr Edwards and Mr Gould was that 10⁻⁶ was on the low side (it equates to a single false ACK in over 33 minutes of continuous data). It was suggested to Mr Gould that a less stringent requirement of 10⁻⁴ would yield less of a benefit because there would be a lower SNR target for the optimal detector, but the same would be true of the MAP detector. Nor was it suggested that this would affect the benefits at high and low p values.

230.

Thirdly, the system benefit would depend on how significant the ACK/NACK power was compared to the overall uplink power. As noted above, the benefit calculated by Shad is significantly more than 2 dB at the extremes of p. Moreover, as Mr Edwards accepted, the skilled person would have been aware that the power saving from soft handover in UMTS was around 2 dB, so 2 dB would have been regarded as material. Consistently with this, when Philips proposed differential gains on ACKs and NACKs to RAN WG1, it suggested that the benefit would be 0.8 dB when not in soft handover and 2.3 dB in soft handover.

231.

Fourthly, a difference of 2 dB may or may not be material depending on the acknowledgement channel power and its importance relative to other uplink channels in operation. But the answers given to the third reason also apply here.

232.

Fifthly, the utility of Shad’s approach would depend on whether the concern was with the average or peak power. Shad’s proposal only assists with reducing average power, but would be likely to increase peak power in the ACK/NACK signals. As Mr Edwards accepted, however, the Patent does not explain how to deal with the peak power issue. The obvious way to solve the problem both in Shad and in the Patent would be to ensure that, if an MS could not transmit at a given power, it should scale the power of its transmissions down until it could, as proposed in UMTS Release 4. This is indeed how the issue was dealt with in relation to ACK/NACK signalling in Release 5.

233.

Sixthly, Shad’s proposal is directed to a system in which an ACK or a NACK is always sent within a known time window. Shad does not address a system such as HSDPA where an ACK or NACK is not always sent (i.e. there is a DTX condition). If the skilled person considered this, however, they would realise that in poor downlink conditions Shad would set the decision threshold such that it would classify a DTX as a NACK, which is the desired response. Only if the channel conditions were very good (p=0.8 or 0.9) would a DTX be mistaken for an ACK. But if the channel conditions are very good, the chances of a DTX are very low. When this point was put to Mr Edwards, he agreed that “it is not of a big concern”, although he did not accept that it would be of no concern.
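The DTX point can be illustrated with a minimal sketch; the threshold values are invented for illustration and are not taken from Shad or from the evidence:

```python
def decide_ack_nack(received, z):
    """Two-way decision at the BS: ACK if the received value exceeds the
    threshold z, otherwise NACK. A DTX delivers essentially no energy, so
    the received value is near zero: with z biased towards the (positive)
    ACK voltage it is read as a NACK, and only a negative z would read it
    as an ACK."""
    return "ACK" if received > z else "NACK"

print(decide_ack_nack(0.0, 0.3))    # z biased towards ACK -> DTX read as "NACK"
print(decide_ack_nack(0.0, -0.3))   # illustrative negative z -> DTX read as "ACK"
```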

234.

Accordingly, I conclude that the skilled person would not simply put Shad to one side. Indeed, as I shall explain in a moment, I think that, at one point in his cross-examination, Mr Edwards accepted this.

235.

On the other hand, however, it is important to appreciate that it is not the Defendants’ case that the skilled person would proceed to implement Shad in the manner proposed in section 4 of Shad. Rather, the Defendants’ case in a nutshell is that the skilled person who was engaged in developing HARQ for HSDPA in UMTS and who was shown Shad would see the potential benefits of applying differential gains to the ACKs and NACKs as proposed by Shad in that context, but would perceive problems in implementing Shad as proposed in section 4 and would realise that an obvious alternative way in which to implement Shad would be for the BS to set the gains.

236.

It follows that, although I do not consider the skilled person would simply put Shad to one side, Mr Edwards’ first point remains relevant to the Defendants’ obviousness case.

237.

As noted above, I think that Mr Edwards accepted the first part of this case:

“Q. I am not sure suggesting the skilled person would not read on, what I am putting to you is the skilled person is not going to be interested in implementing 1XTREME; I suggest to you he is interested in taking the concepts of Shad and considering how he can implement them in HSDPA?

A. Yes, and I think they would look at this and see relevance, potential relevance, to the HSDPA ACK/NACK channel, and they would read his implementation and try and follow it.”

On the other hand, it can be seen from this that Mr Edwards did not accept that implementation in the context of HSDPA would in itself lead the skilled person to do anything different to what Shad taught. Nor did Mr Gould’s evidence go quite that far.

238.

The key question, therefore, is whether it would have been obvious to the skilled person that, rather than simply trying to implement Shad in the manner it proposes, an alternative way in which to implement Shad would be for the BS to signal the gains to be applied to the ACKs and NACKs to the MS.

239.

The experts agreed that, assuming the skilled person did not put Shad on one side, the skilled person would appreciate that Shad could be implemented in the manner suggested in section 4 of Shad by generating the optimised values of k, l and z in advance and storing the values in one or two look-up tables at the MS and/or BS. If the look-up table for k and l was stored at the MS, the MS could count the number of ACKs and NACKs it sent to determine p, and hence the ratio of k to l, and count the number of duplicate and missing frames to determine pfack and pfnack in order to scale the values of k and l obtained from its look-up table. Mr Gould’s opinion in his first report was that, in this approach to implementation, the skilled person would arrange for the MS to store z in its look-up table and signal the optimal value of z to the BS. Mr Edwards disagreed with this on the basis that the BS was able to estimate p itself from the number of ACKs and NACKs it received, and thereby find z from a look-up table stored at the BS, which would avoid the need for z to be signalled from the MS, although he agreed that the MS would feed back some indication of channel quality (i.e. a measurement report). Mr Gould agreed in his second and third reports that that was an option. Mr Gould also agreed that unnecessary signalling would be avoided. The conclusion I draw from this evidence is that the skilled person would not propose implementing Shad by means of a single look-up table containing k, l and z in the MS.
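A minimal sketch of that MS-side arrangement is set out below; the class structure and the table contents are illustrative placeholders rather than values taken from Shad’s Table 1 or from the evidence:

```python
class MsGainController:
    """Illustrative MS-side look-up of the ACK/NACK gains: the MS counts
    the ACKs and NACKs it has sent to estimate p and reads off (k, l) from
    a pre-computed table. The table entries below are placeholders; in a
    real design they would be the optimised pairs for each value of p."""

    TABLE = {0.1: (2.0, 0.5), 0.3: (1.5, 0.8), 0.5: (1.1, 1.0),
             0.7: (0.9, 1.3), 0.9: (0.6, 1.9)}   # hypothetical p -> (k, l)

    def __init__(self):
        self.acks = 0
        self.nacks = 0

    def record(self, ack: bool):
        """Record each acknowledgement the MS sends."""
        if ack:
            self.acks += 1
        else:
            self.nacks += 1

    def gains(self):
        """Return the (k, l) pair for the table row closest to measured p."""
        total = self.acks + self.nacks
        p = self.acks / total if total else 0.5
        key = min(self.TABLE, key=lambda row_p: abs(row_p - p))
        return self.TABLE[key]
```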

240.

Mr Gould expressed the opinion in his reports that the skilled person would perceive disadvantages in having a look-up table for k and l in the MS and would appreciate that a better option would be to have a single look-up table (or rather, a set of look-up tables) in the BS, enabling the BS to derive values of k, l and z and to signal the values of k and l to the MS.

241.

The first disadvantage identified by Mr Gould was that the MS would have to carry out measurements over a long time window to produce meaningful results for pfack and pfnack, during which time the channel quality may have changed. Mr Edwards’ response to this was that, if the skilled person thought along those lines, they would not take Shad forward at all. If they did take Shad forward, they would take Shad’s proposal to gather statistics at face value and consider a sensible time period within which to gather statistics which would allow scaling to occur. Mr Edwards suggested that a period of five seconds would be adequate since that would enable the MS to get statistics based on 2,500 ACK/NACK transmissions. Mr Gould’s rejoinder to this suggestion was that a period of five seconds would not enable the false ACK rate reliably to be determined.

242.

The second disadvantage identified by Mr Gould was that having a look-up table in the MS would make regional variants and future updates difficult. Mr Edwards’ response to this was that Mr Gould’s approach would still involve having some kind of look-up table in the MS. Mr Gould accepted that an efficient way to implement Shad with the BS controlling the power offset would be for the BS to signal the gain factors using a limited number of bits based on the number of different values to be used, rather than the actual gain factors, and for a look-up table at the MS to be used to determine the actual gain factors. This would introduce some constraint upon changing the system, although not as much as having the complete look-up table in the MS.

243.

The third disadvantage identified by Mr Gould was that a number of different look-up tables would be required to be stored by the MS to accommodate different channel conditions. This would introduce undesired complexity. Mr Edwards’ response to this was that he agreed that having multiple look-up tables would introduce undesired complexity, but he did not agree that the skilled person would consider this necessary given that Shad proposes to deal with fading and power control simply by scaling the base power level to which the optimum ratio of k to l for any given p is applied. Mr Edwards also agreed that this might mean that the benefits claimed by Shad were not fully realised in practice, but that takes one back to the question of whether the skilled person would pursue Shad at all.

244.

The conclusion which I draw from this evidence is that, while there is some force in the points made by Mr Gould, and in particular the first one, they all depend on the skilled person not simply implementing Shad’s teaching, but rather appreciating that there were potential disadvantages to the approach proposed in section 4, and therefore being led to consider an alternative way of implementing Shad’s proposal for differential gains, yet not being put off Shad altogether.

245.

Mr Edwards expressed the opinion in his reports that the skilled person would perceive disadvantages in the BS signalling k and l to the MS, and that the skilled person would need to make “a series of leaps” to get to Mr Gould’s proposed implementation of Shad.

246.

The first disadvantage identified by Mr Edwards was that it would make no sense to suggest that the BS should provide the MS with information that it already had (p or 1 - FER). Mr Gould’s response to this was that, as both experts agreed, there was no difficulty with the BS calculating p. Mr Edwards’ rejoinder was that, given that the MS had p, it could look up k and l without any signalling being required if k and l were stored in the MS.

247.

The second disadvantage identified by Mr Edwards was that it would be circular to suggest that the BS should provide the MS with information that was peculiarly within the knowledge of the MS (the statistics for determining pfack and pfnack) since this would require the MS to provide that information on the uplink. This was particularly so since the underlying goal was to provide the BS with relatively straightforward information (i.e. ACK/NACK bits). Mr Gould disputed that the MS would need to count the number of false ACKs and NACKs and report the rates to the BS. He said that the BS could derive them from the measured SNR and/or SIR on the uplink channel.

248.

Mr Edwards’ response was that this would not occur to the skilled person and would not be viable in this context. The core reason he gave for this was that the suggestion of using SNR as a proxy for error targets was circular. Any attempt to use values of SNR as a reference must assume a channel model. The values in Shad’s table were calculated using an AWGN assumption. Scaling the power of the acknowledgements so as to obtain a measure of SNR that corresponded to that in Shad’s table would not mean that the error targets were met in practice, since the table did not reflect the actual channel. It was not feasible to create a look-up table that matched the real channel conditions; if the skilled person could create such a table, there would be no need for scaling at all.

249.

Mr Gould’s rejoinder to this was that the BS would have a detailed knowledge of the uplink radio channel in UMTS from its matched filter and channel estimator, and hence could select the most appropriate look-up table at any point in time.

250.

It seems to me that the effect of these arguments and counter-arguments is to focus attention on the question of whether Mr Gould’s proposed implementation of Shad at the BS would be obvious to the skilled person.

251.

Mr Edwards suggested that Mr Gould’s approach involved six steps, as follows:

“The Skilled Person would first need to contemplate the abandonment of the error statistics gathered by the MS, which are essential to Shad. Second, the Skilled Person would need to hit upon the idea of approximating ACK/NACK error statistics with SNR. Third, the Skilled Person would need to envisage replacing Shad’s table with a table calculated on a different basis. Fourth, the Skilled Person would need to keep going, envisaging multiple tables for different channel conditions and geographies. Fifth, the Skilled Person would need to hit upon the idea of repeating this process still further for different error targets, corresponding to different data services. Finally, the Skilled Person would need to envisage switching between look-up tables for different channel conditions, such as different levels of uncompensated fading, based on measurement of the uplink, while also monitoring the uplink as a proxy for the target error rates.”

252.

In my view this list involves an element of double-counting, and there are really only four steps, the first four. Of these, the key ones are the second and the fourth.

253.

So far as the second step is concerned, as touched on above, Mr Gould’s evidence in his reports was that the skilled person would appreciate that the BS could monitor the SNR and/or SIR on the uplink channel to monitor uplink channel quality, thereby providing an indication of the probability of false ACKs and false NACKs based on the values of SNR/SIR taken from the look-up tables. If the measured SNR/SIR went down, the probabilities of false ACKs and false NACKs would go up. The BS could then signal to the MS to increase the power on the acknowledgement signals by a gain factor. Given that the skilled person would be aware that the generally accepted method of power control for uplink channels was for the BS to control the power of the MS and that it had already been proposed in paragraph 9.1.7 of TR 25.855 that the BS could monitor the SIR from the uplink to set the power of the acknowledgement signal, it would only take a small step to arrange for the BS to signal a power gain for each of ACK and NACK.
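A minimal sketch of that BS-side control is set out below; the comparison against a target SIR drawn from a look-up table, the step size and the function name are illustrative assumptions rather than anything specified in Shad, TR 25.855 or the evidence:

```python
def bs_gain_command(measured_sir_dB, target_sir_dB, current_offset_dB, step_dB=0.25):
    """Illustrative BS-side control: the BS compares the measured uplink
    SIR with the SIR its look-up table associates with the required
    false-ACK/false-NACK rates and signals the MS to raise or lower the
    acknowledgement gain offset accordingly."""
    if measured_sir_dB < target_sir_dB:
        return current_offset_dB + step_dB   # channel degraded: command more gain
    if measured_sir_dB > target_sir_dB + step_dB:
        return current_offset_dB - step_dB   # channel improved: back the gain off
    return current_offset_dB
```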

254.

In cross-examination, Mr Gould accepted that it would require a measurement of p in order to weight the received ACKs and NACKs, but that the skilled person would have to conduct an investigation in order to determine the period over which the base station would have to measure p. That time period might be five seconds (i.e. the length of time that Mr Edwards suggested might be used in Shad’s implementation), but Mr Gould just could not say: “it would be dangerous for me to suggest a number without going through that process”. As Mr Gould accepted, the accuracy of the measurement of p at the BS will be affected by the false ACK and false NACK error rates. Although he thought that this would have a relatively small impact, he had not attempted to quantify the errors associated with this system.

255.

Turning to the fourth step, Mr Gould could not say with any confidence how many look-up tables would be required. Instead, he accepted that the skilled person would have to go through a development project to work out what the most important criteria were and then select the number of tables from a large number of different permutations, given that Mr Gould identified at least five different parameters, each of which could take a range of different values. Mr Gould himself had not done the exercise, but he said he would “hazard a guess” that 5-10 look-up tables might suffice given that he thought that the main driver would be the rate of fading.

256.

Philips contend that, if the skilled person did embark on such a project, they would see that the values of k and l depend only on p. Mr Gould accepted that this would be the result if the step size for the SNR in Shad’s algorithm was reduced sufficiently (eventually to infinitely small steps) so as to obtain truly optimal results. This would mean that there would be no need to signal the values of k and l from the BS to the MS. Indeed, to do so would waste system resources for no benefit because the optimal values could be permanently set in the MS for each value of p.

257.

This brings me back to the point that it was Mr Gould’s evidence that the skilled person reading Shad would appreciate, in particular from the fact that k and l were not equal at p=0.5, that Shad’s system was not truly optimised, and that it could be further optimised by reducing the step size in the algorithm. Thus the logic of Mr Gould’s reading of Shad is that the skilled person would think that the way forward was further to optimise the algorithm, which would enable them to set optimal values of k and l for each value of p in the MS.

258.

Mr Gould’s answer to this point when it was put to him was to say that he agreed with Mr Edwards’ evidence that the skilled person would consider that 0.25 dB was a reasonable step size and would not think it necessary to go to a smaller step size. But that answer involves the skilled person taking Shad’s teaching at face value and following it.

259.

In my judgment the arguments on obviousness are quite finely balanced. At first blush, at least with the benefit of hindsight, it appears that implementing Shad’s proposal for differential gains on the ACKs and NACKs at the BS rather than the MS would be an obvious alternative. On the other hand, the evidence with respect to Mr Gould’s proposed approach shows that changing Shad’s implementation is less straightforward than it appears. Moreover, the logic of Mr Gould’s reading of Shad actually points in a different direction if the skilled person is minded to do anything other than simply following Shad’s teaching.

260.

Counsel for the Defendants submitted that the implementation issues did not matter, because the right question was how the skilled person would develop the HSDPA HARQ scheme after reading Shad with interest. I accept that that is a legitimate question, but Mr Gould’s evidence does not establish that the implementation issues would disappear from the skilled person’s mind on that hypothesis (let alone Mr Edwards’ evidence). Counsel for the Defendants also submitted that Philips’ case based on the implementation issues with Shad involved an inconsistency, because it was common ground that the skilled person could implement the Patent without difficulty. I do not accept this: Shad discloses differential gains for ACKs and NACKs in a highly specific context, which is Shad’s proposed optimisation algorithm. The Patent not only discloses the idea free of that context, but also discloses signalling the gains from the BS and adds the possibility of setting them independently. Furthermore, neither of these arguments meets the point that the logic of Mr Gould’s reading of Shad points in a different direction. I would add that no less than 100 paragraphs and 34 pages of the Defendants’ written closing submissions are devoted to their arguments that claim 10 is obvious over Shad, which would hardly be necessary if it was really that simple.

261.

The conclusion I have reached from the primary evidence is that I am not satisfied that claim 10 was obvious over Shad. I am reinforced in this conclusion by the secondary evidence.

262.

Secondary evidence. Qualcomm commented on Shad in 3GPP2-C30-20020307-008, a note for a conference call on 7 March 2002. In the abstract, Qualcomm summarised their evaluation of Shad as follows:

“In [Shad], a R-ACKCH approach where the ACK and NAK responses are transmitted with different powers was presented for discussion. We believe this increases the mobile station complexity with an insignificant, if any, performance improvement. So we recommend that the baseline approach using the same power levels for ACK and NAK responses be retained.”

263.

As Mr Gould accepted, it can be seen from the discussion in the note that Qualcomm were concerned with the DTX problem at p=0.8 and p=0.9 in Shad. In so far as Philips relies upon this evidence to support its case that the skilled person would not pursue Shad for the sixth reason given by Mr Edwards, I am not persuaded by this for the reasons given above. On the other hand, what this evidence does show is that Qualcomm did not think the benefits to be gained from Shad’s proposal warranted the additional complexity at the MS, particularly given the DTX problem. It evidently did not occur to Qualcomm that the gains could be signalled from the BS.

264.

Furthermore, it is notable that Motorola did not submit Shad to 3GPP, but instead proposed that in HSDPA repetition coding should be applied to a system with a biased decision threshold. This proposal was said to provide robust ARQ performance. Motorola worked on both 3GPP and 3GPP2, and there were individuals at Motorola common to both its 3GPP and 3GPP2 teams, including Robert Love.

265.

I have considered the Defendants’ reliance upon the LG and Lucent post-Priority Date contributions above. They assist the Defendants even less in this context, since there is no evidence that the authors had seen Shad.

266.

The Dutch decision. The Defendants rely on a decision dated 18 October 2017 of the District Court of The Hague holding that claim 10 was obvious over Shad, and in particular the reasoning at paragraph 4.17. The Dutch court’s decision is entitled to respect; but, as counsel for Philips pointed out, the evidence and arguments before me were rather different to those before the Dutch court. Accordingly, I have to make my decision based on the evidence and arguments before me.

267.

Conclusion. I therefore conclude that claim 10 is not obvious over Shad.

Conclusion

268.

For the reasons given above, I conclude that the Patent is valid and has been infringed by the Defendants.
