Koninklijke Philips Electronics NV v Nintendo of Europe GmbH

HC12E04759
Neutral Citation Number: [2014] EWHC 1959 (Pat)
IN THE HIGH COURT OF JUSTICE
CHANCERY DIVISION
PATENTS COURT

Royal Courts of Justice

Strand, London, WC2A 2LL

Date: 20/06/2014

Before :

MR. JUSTICE BIRSS

Between :

KONINKLIJKE PHILIPS ELECTRONICS N.V.

Claimant

- and -

NINTENDO OF EUROPE GmbH

Defendant

Henry Carr QC & Tom Hinchliffe (instructed by Bristows) for the Claimant

Adrian Speck QC & Brian Nicholson (instructed by Rouse Legal) for the Defendant

Hearing dates: 8th, 9th, 12th, 13th, 14th, 19th and 20th May 2014

Judgment

Mr Justice Birss:

Contents

Introduction (para 1)
The issues (para 4)
The witnesses (para 23)
The 484 patent (para 29)
The skilled person (para 30)
The common general knowledge (para 37)
The 484 patent specification (para 58)
Claim construction (para 68)
Allowability of the amendments (para 106)
Infringement (para 119)
Novelty (para 141)
WCTM (para 142)
SEGA Heavyweight Champ (para 157)
Alpine Racer (para 164)
Obviousness (para 177)
WCTM (para 179)
SEGA Heavyweight Champ (para 192)
Alpine Racer (para 194)
The 498 and 650 patents (para 195)
The skilled person (para 196)
The common general knowledge (para 202)
The 498/650 patent specification (para 226)
Claim construction (para 238)
Added matter (para 273)
The amendments and double patenting (para 290)
Infringement (para 312)
Novelty (para 329)
Wacom (para 330)
Philips application (para 345)
Sony (para 357)
Obviousness (para 379)
Wacom (para 381)
Philips application (para 410)
Sony (para 427)
Summary of outcomes on 498 and 650 and impact on double patenting (para 440)
Reflection on 498 and 650 (para 450)
Conclusion (para 451)

Introduction

1.

In this action the claimant, Philips, contends that the Nintendo Wii computer game console infringes its patents. The first patent is EP (UK) No. 0,808,484 entitled “Method and apparatus for controlling the movement of a virtual body”. The second patent is EP (UK) No. 1,573,498 entitled “User interface system based on a pointing device”. The third patent is EP (UK) 2,093,650. The 650 patent is a divisional with respect to the 498 patent.

2.

Philips alleges that the Nintendo Wii and Wii U systems infringe all three patents. Nintendo counterclaims for revocation.

3.

Philips has applied for conditional amendments to all three Patents. It admits partial invalidity of the 498 patent in its form as granted. Nintendo takes various objections either to the granted claims or to the proposed amendments based on added matter. There is also an objection to one amendment on clarity grounds and an argument about double patenting. Nintendo does not admit that any of the Wii or Wii U systems infringe any of the claims. The invalidity arguments are summarised below.

The issues

Prior art and claim amendments for the 484 patent

4.

As against the 484 patent four prior art citations are relied on in support of allegations of lack of novelty or lack of inventive step. They are:

i)

Japanese Unexamined Utility Model Application S64-56289 filed by Sega Enterprises Co. Ltd entitled “Boxing Game Device” published on 7 April 1989;

ii)

the prior use of Sega’s arcade game “Heavyweight Champ” made available to the public from 1987. This game is the same as the one described in the Sega Application although the disclosures are not identical;

iii)

the prior use of Nintendo’s NES console when used in conjunction with a device called the Power Pad, made by Bandai, to play a computer game called World Class Track Meet (WCTM) from 1988;

iv)

the prior use of Namco’s “Alpine Racer” arcade game. This was made available to the public from June or July 1995.

5.

Philips accepts that all these matters form part of the state of the art as regards the 484 patent.

6.

For the 484 patent the claims maintained as independently valid are: claim 1 and claim 5 as granted, claim 1 as proposed to be amended, and new claim 9 as proposed to be amended.

7.

Claim 1 of the 484 patent as granted is as follows:

1.

Virtual body modelling apparatus operable to generate and animate under user direction a representation of a body in a virtual environment the apparatus comprising:

a first data store, holding data defining the virtual environment; a second data store, holding data related to features of the virtual body representation;

user motion detection means monitoring movement of the user in a physical environment;

and processing means arranged to generate a representation of the virtual environment based on data from the first data store,

to generate the body representation within the virtual environment based on data from the second data store,

and to periodically modify the generated body representation in response to signals received from the user motion detection means;

characterised in that the second data store holds data defining at least one sequence of body motions,

and the processor is arranged to call said sequence data and modify the generated body representation such as to follow the sequence of motions on detection of one or more predetermined signals from the user motion detection means.

8.

Claim 5 of the 484 patent as granted is as follows:

5.

Apparatus as claimed in Claim 1, wherein the user is presented with the image of the virtual environment from a first viewpoint, said generated representation of the virtual environment being modified to change the viewpoint in synchronism with the following of the sequence of motions.

9.

Claim 1 of the 484 patent as proposed to be amended is as follows:

1.

Virtual body modelling apparatus operable to generate and animate under user direction a representation of a body in a virtual environment wherein the virtual body representation is a computer-based model that represents the human, or other, form in the virtual environment, the apparatus comprising:

a first data store, holding data defining the virtual environment; a second data store, holding data related to features of the virtual body representation;

user motion detection means monitoring movement of the user in a physical environment;

and processing means arranged to generate a representation of the virtual environment based on data from the first data store,

to generate the body representation within the virtual environment based on data from the second data store,

and to periodically modify the generated body representation in response to signals received from the user motion detection means and an adaptive mechanism;

characterised in that the second data store holds data defining at least one sequence of body motions,

and the processor is arranged to call said sequence data and modify the generated body representation such as to follow the sequence of motions on detection of one or more predetermined signals from the user motion detection means; and,

the adaptive mechanism is arranged to adapt on the fly to the signals received from the user motion detection means to translate the user’s erratic, variable signals into a steady motion.

10.

The proposed amendments to the 484 patent also involve inserting a new claim 9 after existing claim 8 and renumbering subsequent claims. Claim 9 as proposed to be amended is in this form:

9.

Apparatus as claimed in any preceding claim, wherein the movement of at least one part of the virtual body is directed by the measured movement of the corresponding part of the user’s body.

Prior art and claim amendments for the 498 and 650 patents

11.

The prior art relied on against the 498 and 650 patents is as follows:

i)

a Japanese unexamined patent application No. H07-302148 published on 14 November 1995 entitled “Data input device” (“Wacom”);

ii)

PCT Application WO 00/60534 published on 12 November 2000 entitled “Remote control for display apparatus” (“Philips application”); and

iii)

Japanese unexamined patent application 2002-81909 published on 22 March 2002 entitled “Position detector, position detection method, and entertainment apparatus” (“Sony”).

12.

Philips accepts that all these matters form part of the state of the art as regards the 498 and 650 patents.

13.

The position with the claims of the 498 and 650 patents is complicated. Philips accepts that claim 1 of 498 as granted is invalid. It is anticipated by Wacom. However claims 2, 3 and 5 as granted are maintained as independently valid. Moreover Philips makes two conditional applications to amend. The first conditional application involves amendments to granted claims 1, 2 and 3. The second conditional application only involves an amendment to claim 1 as granted and relates to a particular point on added matter. In fact Philips also has a third and a fourth conditional proposed amendment. The third one is to put together the first and second amendments. The fourth one relates to a typographical error in granted claim 1.

14.

Nintendo did not object to the informal way in which these two latter matters were dealt with but in future patentees should not adopt this course. It is likely to lead to mistakes being made. No doubt, particularly regarding the combination of the first and second amendments, the patentee did not want to make it look as though there were too many alternative amendments being proposed. That is not an excuse for not setting out with precision what the terms of the claims sought actually are. I very much doubt the EPO would have permitted the patentee to seek the third or fourth conditional proposed amendments without having them written out formally. Indeed the fourth conditional amendment really could be converted into three different claim sets as well, since the alteration applies to claim 1 in any of its three proposed forms.

15.

At one stage Philips sought to amend claim 6 of the 498 patent and to insert new claims 8 and 9 but those amendments were not pressed at trial since those claims were not said to be independently valid.

16.

Claims 1, 2, 3 and 5 of 498 as granted are as follows:

Claim 1

User interaction system, comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture; and

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information (I) derived from the picture to the electrical apparatus (110),

wherein the camera (102) is connected to the pointing device (101, 300) so that in operation it images the region pointed to, the system being characterised in that it further comprises at least one room localization beacon (180, 181, 182 the system being), in a room wherein the pointing device is used, that can emit electromagnetic radiation, for use by the digital signal processor (120) in order to recognise which part of the room the pointing device is pointing; and the digital signal processor (120) is further arranged to recognise to which part of the room the pointing device is pointing.

Claim 2

The user interaction system as in Claim 1, further comprising motion sensing means (304) for sensing a motion and/or for calculating a motion trajectory (400, 410) of the pointing device.

Claim 3

The user interaction system as in Claim 1 wherein the motion or the motion trajectory (400, 410) of the pointing device is estimated on the basis of successive pictures imaged by the camera (102).

Claim 5

User interaction system as claimed in Claim 2, wherein the transmitted user interface information (I) includes at least one feature selected from the group consisting of motion trajectory (400) the pointing device (101) and a characteristic signature derived from the motion trajectory (400) of the pointing device (101). ”

17.

Claims 1, 2 and 3 in the form of the first conditional amendment are set out below. I will refer to them as claims 1A, 2A and 3A. They are as follows:

Claim 1A

User interaction system, comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture;

- motion sensing means, and

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information (I) derived from the picture to the electrical apparatus (110),

wherein the camera (102) is connected to the pointing device (101, 300) so that in operation it images the region pointed to, the system being characterised in that it further comprises at least one room localization beacon (180, 181, 182 the system being), in a room wherein the pointing device is used, that can emit electromagnetic radiation, for use by the digital signal processor (120) in order to recognise which part of the room the pointing device is pointing; and the digital signal processor (120) is further arranged to recognise to which part of the room the pointing device is pointing

and wherein the digital signal processor is arranged to analyse gestures made with the pointing device based upon a motion trajectory (400, 410) of the pointing device.

Claim 2A

The user interaction system as in Claim 1, further comprising motion sensing means (304) for sensing a motion and/or for calculating awherein the motion trajectory (400, 410) of the pointing device is estimated on the basis of the motion sensing means.

Claim 3A

The user interaction system as in Claim 1 wherein the motion or the motion trajectory (400, 410) of the pointing device is estimated on the basis of successive pictures imaged by the camera (102).

18.

Claim 1 of 498 according to the second conditional amendment is set out below. I will refer to it as claim 1B. It is as follows:

Claim 1B

User interaction system, comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture;

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information (I) derived from the picture to the electrical apparatus (110),

wherein the camera (102) is connected to the pointing device (101, 300) so that in operation it images the region pointed to, the system being characterised in that it further comprises at least one room localization beacons (180, 181, 182 the system being), in a room wherein the pointing device is used, that can emit electromagnetic radiation, for use by the digital signal processor (120) in order to recognise which part of the room the pointing device is pointing; and the digital signal processor (120) is further arranged to recognise to which part of the room the pointing device is pointing.

19.

Claim 1 in the form of the third (informal) conditional amendment is set out below. I will refer to it as claim 1C. It is:

Claim 1C

User interaction system, comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture;

- motion sensing means, and

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information (I) derived from the picture to the electrical apparatus (110),

wherein the camera (102) is connected to the pointing device (101, 300) so that in operation it images the region pointed to, the system being characterised in that it further comprises at least one room localization beacons (180, 181, 182 the system being), in a room wherein the pointing device is used, that can emit electromagnetic radiation, for use by the digital signal processor (120) in order to recognise which part of the room the pointing device is pointing; and the digital signal processor (120) is further arranged to recognise to which part of the room the pointing device is pointing

and wherein the digital signal processor is arranged to analyse gestures made with the pointing device based upon a motion trajectory (400, 410) of the pointing device.

20.

Claim 1 of what I have called the fourth conditional amendment is set out below. I will refer to it as claim 1D. It is as follows:

Claim 1D

User interaction system, comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture;

- motion sensing means, and

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information (I) derived from the picture to the electrical apparatus (110),

wherein the camera (102) is connected to the pointing device (101, 300) so that in operation it images the region pointed to, the system being characterised in that it further comprises at least one room localization beacons (180, 181, 182), the system being), in a room wherein the pointing device is used, that can emit electromagnetic radiation, for use by the digital signal processor (120) in order to recognise which part of the room the pointing device is pointing; and the digital signal processor (120) is further arranged to recognise to which part of the room the pointing device is pointing

and wherein the digital signal processor is arranged to analyse gestures made with the pointing device based upon a motion trajectory (400, 410) of the pointing device.

21.

For the 650 patent claims 1, 2, 3 and 6 are all in issue. They are as follows:

1.

User interaction system comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture, which camera is connected to the pointing device so that in operation it images the region pointed to; and

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information derived from the picture to the electrical apparatus;

the system being characterised by further comprising:

-

at least one room localization beacon (180,181,182) in a room wherein the pointing devices is used, that is capable of emitting electromagnetic radiation for use by the digital signal processor which is arranged to recognize to which part of the room the pointing device is pointing; and

-

means for estimating a motion or motion or a motion trajectory (400, 410) of the pointing device.

2.

User interaction system as in claim 1, wherein the means for enabling estimating a motion or a motion trajectory of the pointing device is motion sensing means (304).

3.

User interaction system as in claim 1, wherein the motion or the motion trajectory of the pointing device is estimated on basis of successive pictures imaged by the camera at respective instances of time.

6.

User interaction system as claimed in any preceding claim, wherein the digital signal processor is arranged to analyze gestures made with the pointing device based on said motion trajectory.”

22.

Claim 1 of 650 as proposed to be amended is as follows:

1.

User interaction system comprising:

- an electrical apparatus (110);

- a portable pointing device (101, 300) operable by a user for pointing to a region in space;

- a camera (102) taking a picture, which camera is connected to the pointing device so that in operation it images the region pointed to; and

- a digital signal processor (120), capable of receiving and processing the picture, and capable of transmitting user interface information derived from the picture to the electrical apparatus;

the system being characterised by further comprising:

-

at least one room localization beacons (180,181,182) in a room wherein the pointing devices is used, that areis capable of emitting electromagnetic radiation for use by the digital signal processor which is arranged to recognize to which part of the room the pointing device is pointing; and

-

means for estimating a motion or motion or a motion trajectory (400, 410) of the pointing device.

The witnesses

23.

In relation to the 484 patent, Philips called Professor Trevor Darrell. He is a Professor in Residence at the University of California, Berkeley in the Electrical Engineering and Computer Science Department. At the priority date he was completing his PhD studies at the Media Lab at the Massachusetts Institute of Technology. He has significant practical experience in virtual reality, specifically in the development of computer vision for human interaction with virtual environments. He leads a research group with over a dozen graduate students and post-doctoral researchers and runs several large research projects for the US National Science Foundation and the US Defense Advanced Research Projects Agency (DARPA).

24.

In relation to the 498 and 650 patents, Philips relied on the expert evidence of Professor Ian Reid. He works at the Australian Centre for Visual Technologies, School of Computer Science at the University of Adelaide. He has over 25 years’ experience in the field of computer vision and its applications, including human computer interaction. In particular he has experience in the visual tracking of targets and the tracking of position and motion of cameras using the video data acquired by the camera. Before moving to the University of Adelaide Prof Reid was at Oxford University from 1991 until 2012, rising from a post-doctoral researcher to a professor in 2010. His work included a study which showed that in Geoff Hurst’s controversial goal in the 1966 World Cup Football Final the whole of the ball did not cross the whole of the line.

25.

For all three patents, Nintendo relied on the expert evidence of Professor Anthony Steed. He is a Professor in the Virtual Environment and Computer Graphics department of University College London (UCL). Prof Steed undertook his PhD in the mid 1990s at Queen Mary & Westfield College London and moved to UCL as a post-doctoral researcher in 1996. He rose to become a professor in 2009. Prof Steed works in the field of virtual environments and computer graphics. He has also worked on 3D computer graphics as a hobby, writing his first computer game at the age of 11 on a Sinclair ZX81 and ZX Spectrum.

26.

Philips submitted that as a result of the way he had been instructed and the way he was brought into the case, Prof Steed’s evidence exhibited classic hindsight. Having listened carefully to Prof Steed’s testimony and read his reports, I do not accept that the way the witness was instructed has any real bearing on the weight which I may or may not place on his evidence. Patent cases are necessarily conducted ex post facto and I refer to what I said in HTC v Gemalto [2013] EWHC 1876 (Pat) at paragraphs 271-275.

27.

Philips submitted that “the technical background section was developed with the patent in mind” as if this was a point against the expert’s evidence. It is not. There is little point in an explanation of the technical background unless it has the patent in mind. No doubt if there is a particular element which can be shown to involve hindsight reasoning then it is open to a party to identify it and the court can take that into account, but as a blanket observation it is not significant.

28.

In my judgment all three experts were good witnesses, seeking to help the court with their answers and to explain the technology both in their oral testimony and their reports. I am grateful to each of them for their evidence.

The 484 patent

29.

The application for the 484 patent was filed on 14th November 1996 claiming priority from an application on 7th December 1995. It was granted on 31st March 2004. The patent relates to modelling a virtual body in a virtual environment.

The person skilled in the art

30.

Nintendo submitted that the person skilled in the art relevant to the 484 patent is a team with four main skill sets described by Professor Steed. The four skill sets are (a) interactive computer graphics, (b) human-computer interaction, (c) electronic engineering and (d) product design. Professor Steed explained that the 484 patent is primarily directed to (a) and (b).

31.

Philips, on the other hand, referred to Professor Darrell’s formulation of the identity of the skilled person. This was based on the fact that the patent is directed to the area of interactive virtual environments. Accordingly the skilled person would be someone experienced in developing interactive environments. They would be a computer science graduate with two years’ further academic study or research experience. There was a small debate about how many years’ experience a person or team would have but the issue was unimportant.

32.

The real dispute between the parties relates to computer games. Professor Darrell did not regard computer games as within his formulation of the identity of the skilled person. Philips submitted that since the patent was not focussed on computer games the skilled person was not someone interested in computer games. Nintendo contended that the field of interest of the skilled person would include computer gaming which was one of the biggest technical applications of virtual reality. It pointed out that the Wii, which is alleged to infringe the patent, is a game system.

33.

It has been well understood in English law for over a century (see Gillette Safety Razor v Anglo American (1913) 30 RPC 465) that those working in industry are entitled to be sure that they are not infringing any valid patent if all they do is make obvious improvements to the prior art. If without an act of invention a person skilled in the art in 1995 would have produced a computer game system which falls within the patent then the patent lacks inventive step because the claims cover something which is obvious. It makes no difference to this conclusion if something else within the wide limits of the claim was not obvious when considered from the point of view of the developers of sophisticated virtual reality systems which were being investigated in universities. There clearly were real people or teams working on games development at the relevant time with the skills described by Prof Steed. Nintendo’s definition of the skilled person, at least from the point of view of obviousness, is a legitimate one. This is the same as or closely related to the point made by Laddie J in Inhale v Quadrant [2002] RPC 21 at paragraph 42.

34.

A distinct question is whether the person skilled in the art from the point of view of considering the true interpretation of the patent and how to put it into practice is the same notional person as the person skilled in the art from the point of view of assessing obviousness. In Schlumberger v Electromagnetic Geoservices [2010] EWCA Civ 819 the Court of Appeal held that the two did not necessarily have to be the same. The case before me is different in that it is not concerned with whether it was obvious to combine skills from different fields, it is concerned with a wide claim which covers things in at least two distinct fields. Just as there were real teams of the kind described by Prof Steed, so too there were real teams of the kind described by Prof Darrell. They worked on interactive virtual environments and were not concerned with games.

35.

There is no dispute that the claims of 484 cover systems used for games and I can see that it would be of interest to a games developer, but I am not convinced that as a document it is really directed to a games developer. Construction is an exercise in determining what a person skilled in the art would understand the inventor to have meant by the language they have used (Kirin Amgen). To interpret a patent correctly involves reading it from the point of view of the person to whom it was directed with their common general knowledge. Otherwise the wrong conclusion might be reached in relation to construction, sufficiency or added matter. Two of these three issues are live in this case.

36.

On the facts of this case I do not believe it makes any difference whether the skilled person to whom the patent is addressed is a games developer or not. Since the patent covers games I will accept Prof Steed’s formulation for both purposes. However in order to distinguish between the two in case this point goes further, I will refer to the notional person to whom the patent is addressed as the skilled addressee and the notional person when considering obviousness as the skilled games system developer.

Common general knowledge

37.

The important areas of common general knowledge relate to interactive computer graphics and human-computer interactions.

38.

Interactive computer graphics relate to all types of real time systems which involve the creation of computer graphics based on user interaction. Examples of interactive computer graphic systems are virtual reality systems, data visualisation, air traffic control displays and other control displays such as for a cockpit. A distinction is drawn sometimes between virtual reality systems and other interactive computer graphics.

39.

Professor Steed said, and I accept, that virtual reality systems try to create a “virtual environment”; in other words they try to mimic a real situation or place. Elements that make up the virtual environment respond to user inputs.

40.

There are two main types of virtual reality system: immersive and non-immersive. In an immersive virtual reality system the virtual environment is displayed as if it surrounds the user, whereas a non-immersive virtual reality system refers to types of virtual reality systems where the user is not surrounded by displays but instead observes a display. A normal desktop computer screen is a non-immersive type system. However there are degrees of immersiveness. In as much as they present a virtual environment at all, home console computer games and most arcade computer games are examples of non-immersive virtual reality systems.

41.

Professor Darrell focussed on the field of immersive virtual reality. This involves special hardware such as a head mounted display. Head mounted displays included goggles worn by the user which placed a display screen in front of each eye. Another kind of special hardware was the CAVE system which surrounded a user with several walls or rear projected video screens and updated the images based on the position of the user inside the environment. At the MIT Media Lab, Professor Darrell and his colleagues developed the ALIVE system which presented a single large screen video display to a participant in the virtual world.

42.

The term “avatar” is used to refer to the version of the real person which exists in the virtual environment. The images can be presented to the user as they would be seen through the eyes of the avatar. This is called a first person viewpoint. With a third person viewpoint, the images depict the user’s avatar. The ALIVE system presented a third person viewpoint.

43.

Virtual reality systems prior to 1995 used a number of tracking systems to estimate the position of a user in an environment. Examples included magnetic sensors which allowed the computer to sense how a person’s limbs were moving and the data glove, a device for measuring the position of the hand.

44.

Methods of allowing the avatar to navigate inside the virtual world included the user pointing in the direction in which they wanted to travel, operating a joystick or pressing their foot on a pressure pad to indicate direction.

45.

All the matters I have described so far would be part of the common general knowledge of the skilled person whether they were based on Prof Darrell’s formulation or based on Prof Steed’s formulation.

46.

I will turn to consider specific items of common general knowledge which were dealt with by Prof Steed. These matters were well known to the skilled games system developer in 1995.

47.

Professor Steed’s evidence focussed on the work on interactive computer graphics and human computer interaction relating to computer games at the relevant time. He explained that a typical workstation or home computer in 1995 had most of the components required for an interactive computer graphics system. The control would be exercised by the main CPU and memory, the graphics would be handled by a graphics card and the audio would be handled by an audio card on the computer. In 1995 a home video games console would be less powerful than a typical home computer of the same era but would have the same basic architecture.

48.

Professor Steed summarised the history of computer graphics and displays and sought to classify them in three classes. Class 1 were custom image generators which were common in the 1970s to the mid 1980s. One approach was to use bitmapping. The display was divided into pixels and the state of each pixel was defined by a number (1 or 0 for black or white, and other numbers for colours). The picture is defined by a bitmap which records the value for each pixel. A sprite is a small bitmap that represents a shape or figure. By changing the location of the sprite, the figure moves on the screen. An alternative approach was to use vector graphics. Here the image is defined by line segments. An object might appear as a polygon, i.e. a group of line segments which all link together. The well known 1970s computer games Space Invaders and Asteroids are examples of early bitmapping and vector graphics respectively.
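
The distinction can be illustrated by a short sketch in Python (purely illustrative; the shapes, sizes and names are invented and are not taken from the evidence): a sprite is a small grid of pixel values copied onto the display bitmap at a given location, whereas vector graphics hold the same sort of figure as line segments joining points.

```python
# Illustrative sketch only; the shapes and sizes are invented for exposition.

# A sprite: a small bitmap in which each entry is one pixel (1 = lit, 0 = unlit).
SPRITE = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]

def blit(screen, sprite, x, y):
    """Copy the sprite's pixels onto the screen bitmap at position (x, y)."""
    for row, line in enumerate(sprite):
        for col, pixel in enumerate(line):
            screen[y + row][x + col] = pixel

# A 10 x 10 bitmapped display, initially blank.
screen = [[0] * 10 for _ in range(10)]
blit(screen, SPRITE, x=2, y=3)

# "Moving" the figure means clearing the display and drawing the sprite again
# at its new location on the next frame.
screen = [[0] * 10 for _ in range(10)]
blit(screen, SPRITE, x=3, y=3)

# The vector-graphics alternative: a shape held as line segments joining points
# (here a square, i.e. a polygon), rather than as a grid of pixels.
square = [((0, 0), (4, 0)), ((4, 0), (4, 4)), ((4, 4), (0, 4)), ((0, 4), (0, 0))]
```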

49.

From the 1980s onwards general purpose sprite based graphics hardware was produced which Prof Steed defined as Class 2. This class included many of the earlier video games consoles such as the Nintendo NES System and home computers such as the BBC Micro.

50.

Class 1 and Class 2 systems generally presented a two dimensional image but a three dimensional effect could be created. An early example was the seminal 3D computer game Elite from 1982. A later example of a sprite based 2D system which creates a 3D effect was the game Doom in 1993. This was a well known game at the time.

51.

Doom was a violent game in which the player tries to kill other people and creatures and avoid being killed. It is possible to have a multiplayer version of the game in which avatars of other people are shown in the game. Movement in the game is caused simply by pressing a key on the keyboard to move the avatar forwards. The graphics do not attempt to animate or represent the player’s own avatar other than to depict the weapon being held by the avatar. The viewpoint is in effect looking out from the avatar’s eyes.

52.

As the player advances in Doom the image bobs up and down to represent the result of the head of the avatar moving as it walks forward.

53.

The main disadvantage of sprite based graphics in Class 1 and Class 2 image generators is that each graphic element is stored as a bitmap. This is inefficient for large objects. Vector graphics can potentially store complex graphics more efficiently and are naturally suited to the description of 3D objects. 3D vector graphics display objects and scenes in a three dimensional coordinate system. An object defined in this way with line segments and faces is called a mesh object.

54.

Prof Steed defined his Class 3 image generators as hardware 3D engines. They arose in arcade games in the very late 1980s and were more widely available in the early 1990s. In the mid 1990s hardware capabilities were improving rapidly. In 1994 a 3D rendering technique called texture mapping became available. This takes a bitmap and copies the pixels onto the surface of a polygon to create more realistic looking surfaces. In Class 3 the Professor placed the Sony Playstation (1994) games console.

55.

The engine for rendering 3D graphics follows five stages known as the 3D graphics pipeline. The first step is scene assembly which constructs the 3D geometry of the scene. It also specifies the virtual camera that will observe the scene. The next step is 3D projection and lighting. This takes the scene and the virtual camera and makes a 2D projection to work out what the scene would look like through the virtual camera. The image displayed on the user’s display will be the image as if the user was looking through the lens of the virtual camera. Once this step has been completed the remaining steps in the pipeline produce a video signal to drive the display. Prof Steed explained that most 3D graphics engines follow these stages in this order.
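
The projection stage can be illustrated by a single worked example in Python (illustrative only; the simple pinhole camera model and the vertex are invented assumptions): a point defined in the 3D coordinates of the scene is projected onto the 2D image plane of the virtual camera, and that 2D image is what the user ultimately sees.

```python
# Illustrative sketch of the projection stage of a 3D graphics pipeline.
# The camera model (a simple pinhole) and the vertex are invented for exposition.

def project(vertex, focal_length=1.0):
    """Perspective-project a 3D point (x, y, z), expressed in camera coordinates
    with the virtual camera at the origin looking along +z, onto an image plane
    at distance focal_length in front of the camera."""
    x, y, z = vertex
    if z <= 0:
        return None                       # behind the camera: not visible
    return (focal_length * x / z, focal_length * y / z)

# One vertex of a mesh object, two metres in front of the camera, half a metre
# to its right and a quarter of a metre above its axis.
print(project((0.5, 0.25, 2.0)))          # -> (0.25, 0.125) on the image plane
```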

56.

The positioning of the virtual camera is determined by the viewpoint the game designer wishes to employ. Games tended to use one of three viewpoints: a first person viewpoint, a first person over the shoulder viewpoint or a third person viewpoint. The first and last have been discussed already. The first person over the shoulder viewpoint was common in gaming. The player’s view is attached to their avatar so that they see the environment in a way and from a viewpoint that approximates to that which the avatar would see but the viewpoint is fixed behind the avatar so that the player can see their avatar as part of the scene. An example was the well known car racing game called Virtua Racer from 1992. It used a full 3D engine. The scene showed the car being driven as well as the track.
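
The difference between these viewpoints comes down to where the virtual camera is placed relative to the avatar. The short Python sketch below (with invented offsets; not taken from any game in evidence) places the camera a fixed distance behind and above the avatar for a first person over the shoulder viewpoint, so that the avatar itself stays in the scene.

```python
# Illustrative sketch: placing the virtual camera for an "over the shoulder"
# viewpoint. The offset values are invented for exposition.
import math

def over_the_shoulder_camera(avatar_position, avatar_heading):
    """Put the camera a fixed distance behind and above the avatar, looking
    in the direction (in radians) in which the avatar is facing."""
    back, up = 2.0, 1.5                          # metres behind and above the avatar
    ax, ay, az = avatar_position
    camera_position = (ax - back * math.cos(avatar_heading),
                       ay - back * math.sin(avatar_heading),
                       az + up)
    return camera_position, avatar_heading       # camera pose for the 3D pipeline

print(over_the_shoulder_camera((10.0, 5.0, 0.0), avatar_heading=0.0))
# -> ((8.0, 5.0, 1.5), 0.0): the camera sits behind the avatar, which stays in view
```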

57.

All of the matters I have described formed part of the common general knowledge of the skilled games system developer.

The specification of the 484 patent

58.

Paragraph 1 explains that the invention relates to a method and apparatus for controlling the movement of a virtual body in a computer generated virtual environment. The virtual body referred to is a computer based model that represents the human or other form. The term “virtual” is used to distinguish from the physical or real world as is explained in paragraph 2. Examples of a virtual environment include the interior of a building for an architectural modelling application or an urban or surreal landscape for a game or other application. The point is that the virtual body is controlled by the user to move around the environment.

59.

The objects of the invention are in paragraphs 8 and 9. The point of the invention is to control the movements of a virtual body in a virtual environment in response to the user’s body movement in a way which is relatively simple to implement while providing acceptable or better levels of realism (paragraph 8). Paragraph 9 of the patent refers to feedback.

60.

The specification contemplates a situation in which the user is walking in the real physical world, or at least making a walking motion, and there is a desire to simulate the user’s walk in the virtual environment. Instead of following the user’s motion in full detail and trying to model it exactly, paragraph 11 explains that by using pre-stored sequences of body motions, the need to monitor the user’s movements and update the generated image of the virtual body to exactly follow the user’s execution of these movements is greatly reduced. This is the core idea in the patent.
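
That core idea can be expressed in a few lines of illustrative Python (the frame names and the “foot_down” signal are invented; this is a sketch, not a description of any embodiment in the patent): on detection of a predetermined signal from the motion detector the processor simply advances the generated body representation through a pre-stored sequence, rather than tracking the user’s movements in full detail.

```python
# Illustrative sketch of the core idea: a pre-stored sequence of body motions is
# played out when a predetermined signal is detected from the motion detector.
# The frame names and the signal are invented for exposition.

class Body:
    """Stand-in for the generated virtual body representation."""
    def show(self, pose):
        print("virtual body now shows:", pose)

WALK_SEQUENCE = ["left_foot_forward", "mid_stride", "right_foot_forward", "mid_stride"]

def animate(body, detector_signals):
    frame = 0
    for signal in detector_signals:
        if signal == "foot_down":             # the predetermined "key" signal
            body.show(WALK_SEQUENCE[frame])   # follow the stored sequence of motions
            frame = (frame + 1) % len(WALK_SEQUENCE)
        # irregular signals arriving between the keys are simply ignored

animate(Body(), ["wobble", "foot_down", "jitter", "foot_down", "foot_down"])
```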

61.

Paragraph 12 then proposes that a form of visual feedback may be provided. An image of the user’s viewpoint inside the virtual environment is provided and the feedback is created by modifying the image to change the viewpoint in synchronism as the user walks. In other words as the user’s head would move up and down as they walk, the virtual image also moves up and down.

62.

Paragraph 14 refers to determining the rate of modification of the generated body representation. The idea is that that rate can be set for example by a time-averaged value for the speed of the user’s walk. The result will show a smooth movement of the virtual body unaffected by short hesitations.
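
A short worked example (with invented figures) shows the effect of the time-averaging described in paragraph 14: a single hesitation in the measured stride timing barely disturbs the rate at which the virtual body is shown walking.

```python
# Illustrative sketch of paragraph 14: the animation rate is taken from a
# time-averaged walking speed. The measured intervals are invented.

step_intervals = [0.50, 0.52, 0.90, 0.51, 0.49]    # seconds between foot-down events;
                                                    # the 0.90 s entry is a brief hesitation
average_interval = sum(step_intervals) / len(step_intervals)
steps_per_second = 1.0 / average_interval
print(round(steps_per_second, 2))                   # ~1.71 steps/s: a steady, smooth pace
```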

63.

The examples given in the patent are based on a system which uses a foot axial motion measurement device. An example is shown in Fig. 4 of the patent.

64.

This device allows the movement of the feet of the user to be measured by the computer in order that a walking motion can be represented in the computer image. Two methods of simulating walking motion are described. First in paragraphs 21-22 is a mathematical modelling approach based on inverse kinematics deriving the position of the limbs and body from knowledge of the position of the feet. Second in paragraph 23 is an approach based on cycling through a series of stored sequences, possibly with interpolation between frames to give a smoother walking action.

65.

Two methods of translating between the physical measurements and the action in the virtual world are described. One (paragraphs 24-25) is to directly map the measured or derived position of the human legs onto the representation of the virtual legs. However, a possible problem with this lies in the action of individual users when exposed to the measurements. In order to deal with this, one proposal is that the mechanism adapts on the fly to the measurement apparatus output in order to translate the user’s erratic, variable measurements into a steady walking motion. Any number of adaptive mechanisms may be used and examples are mentioned of an adaptive filter or a neural network. The physical movement corresponding to putting a particular foot on the ground may be used as a key with the virtual modelling taking its timing from the key regardless of what irregular motions from the user’s legs occur between the keys.

66.

The second method of translating between the physical measurements and the action in the virtual world is mentioned in paragraph 26. This involves using pre-stored sequences of animations to represent the walking motion calculated from the speed that the real walker is walking.

67.

Viewpoint modulation is referred to in paragraph 31. This mentions the idea of giving the appearance of up-down or sideways movement of the field of view as would be experienced when walking in the real world.
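
A minimal sketch of such a modulation (in Python, with an invented amplitude; purely illustrative): the height of the viewpoint rises and falls in step with the walking cycle, giving the up and down movement of the field of view that a real walker would experience.

```python
# Illustrative sketch of viewpoint modulation: the viewpoint height rises and
# falls in synchronism with the walking cycle. The amplitude is invented.
import math

def viewpoint_height(base_height, walk_phase, bob_amplitude=0.05):
    """walk_phase runs from 0.0 to 1.0 over one stride; the viewpoint bobs up
    and down by bob_amplitude metres as the virtual body walks."""
    return base_height + bob_amplitude * math.sin(2 * math.pi * walk_phase)

print([round(viewpoint_height(1.7, p), 3) for p in (0.0, 0.25, 0.5, 0.75)])
# -> [1.7, 1.75, 1.7, 1.65]
```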

Claim construction

Claim 1 as granted

68.

In addition to an overarching point about the interpretation of claims to computer apparatus generally, a number of detailed issues of construction arose in relation to claim 1 as granted. They are:

i)

Virtual environment (etc.)

ii)

First and second data stores

iii)

User motion detection means monitoring movement

iv)

Sequence of body motions

v)

Pre-determined signals and periodically modify

Virtual environment (etc.)

69.

The terms “virtual environment”, “virtual body modelling apparatus” and “virtual body representation” were in dispute. The third expression is in claim 1 as proposed to be amended but all three terms clearly should be construed in the same sense and it is convenient to focus on “virtual environment”. Nintendo submitted that it included an environment depicted on a computer screen using 2D sprite based graphics. Philips did not agree and submitted that the virtual environment called for by the claims was a 3D environment in which a 3D model of the relevant body could navigate. Professor Steed’s C.V. describes a virtual environment as a “real time interactive three dimensional model that is a simulation of a real or imagined place”. I find that this is how the skilled addressee would understand the patent specification and the claims. That understanding would also be supported by the various items of prior art referred to in the specification which are consistent with the idea that the virtual environment is a three dimensional environment.

70.

I suppose strictly it might be possible in principle for there to be a computer model of a 3D environment which is then depicted on the screen with 2D sprite based graphics. However that would not be within the claim since the claim also requires the apparatus to generate and animate the representation of the body in the virtual environment.

71.

Nintendo perceived that Philips was arguing that claim 1 included an implicit requirement as to the quality of the claimed apparatus in terms of the resolution of the graphics and the immersive nature of the environment. Nintendo submitted there was no such limitation in the claims. I agree with Nintendo. I have construed the term virtual environment above and virtual body modelling apparatus clearly relates to that environment but there is no requirement that the system must use a particular graphics resolution. A poor and unconvincing virtual environment would still infringe.

First and second data stores

72.

The claim refers to first and second data stores. These need to be distinct but they need not be physically distinct as long as they are logically distinct.

User motion detection means monitoring movement

73.

The claim requires a user motion detection means monitoring the movement of a user in the physical environment. Philips submitted that this requires the ability to measure movement, which I accept. Philips also argued that this therefore requires the measurement of a range of values and the word “monitoring” shows that a range of values must be measured. Philips referred to the description of a potentiometer in the examples as a way of measuring the movement of the user’s feet. The key submission (with an eye on the prior art) is that a switch which is merely turned on or off is not monitoring the movement of the user.

74.

I reject Philips’ submission that the claim would be understood as requiring the measurement of a range or that a switch would be outside the claim. The words of the claim and the patent itself are entirely general. No limits are placed on how the measurements are to be taken either in the claim or in the specification. The reader would not see any reason either from the language used or technically why monitoring had to be continuous. A system which uses a switch or switches on which the user steps, and which allows the system to detect the speed at which a user is stepping, is a means for detecting the movement of a user and is monitoring their movement.
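
The point can be illustrated with a short Python sketch (the sampling period and the readings are invented assumptions): even though each switch reading is only on or off, the times at which the switch closes are enough to derive the speed at which the user is stepping.

```python
# Illustrative sketch: deriving a stepping rate from a simple on/off switch.
# The sampling period and the readings are invented for exposition.

def stepping_rate(switch_samples, sample_period=0.25):
    """switch_samples is a sequence of 0/1 readings taken every sample_period
    seconds; the off-to-on transitions are taken to be the user's steps."""
    step_times = [i * sample_period
                  for i, (prev, cur) in enumerate(zip(switch_samples, switch_samples[1:]), start=1)
                  if prev == 0 and cur == 1]
    if len(step_times) < 2:
        return 0.0
    return (len(step_times) - 1) / (step_times[-1] - step_times[0])

print(stepping_rate([0, 1, 0, 0, 1, 0, 0, 1, 0]))   # three steps, 0.75 s apart -> ~1.33 steps/s
```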

75.

The specification (paragraph 25) discusses using the movement corresponding to putting a foot on the ground as a key from which to time the motion of the virtual body, regardless of irregular motions between the keys. In such a case the reader would not think there was any reason why those irregular motions between keys had to be measured at all.

Sequence of body motions

76.

The claim refers to a sequence of body motions and later to a sequence of motions. Philips submitted this referred to a sequence of motions in a particular order which could be cycled through repeatedly. Thus it referred to things like walking and waving, which are mentioned in the specification. Nintendo argued that the claim was not limited to cases in which the body motions were repetitive.

77.

I prefer Philips’ interpretation for three reasons. First it is consistent with the language of the specification as a whole. There was a suggestion that the reference to movements of the arms and head in paragraph 16 showed that the movements need not be repetitive because arms and the head do not necessarily move repetitively. This does not help however because both are capable of moving in a repetitive way.

78.

Second it makes sense of the operation of the adaptive mechanism described. Nintendo argued that it was not clear how the adaptive mechanism was supposed to operate. This was advanced as part of a lack of clarity objection to the amendments which sought to introduce that feature into claim 1. The essential difficulty described by Nintendo comes down to a problem with making sense of the disclosure of the adaptive mechanism for a case in which the body motions are not repetitive in nature. I agree that the description of the adaptive mechanism in the specification does not make a lot of sense if one is reading it imagining a case in which the motions concerned are not repetitive in nature.

79.

Third Philips’ construction fits with a careful consideration of the language used. The claim does not simply require body motions, it requires a sequence of body motions. This is different from simply requiring a sequence of body poses or body images and is apt to describe modelling something like walking or waving.

Pre-determined signals and periodically modify

80.

The claim requires processing means arranged to generate a representation of a virtual environment and generate the body representation within the virtual environment. The generated body representation must be periodically modified in response to signals received from the user motion detection means. Thus, Philips argues, the modification of the avatar must be periodic.

81.

The characterising portion of claim 1 relates to the second data store defining at least one sequence of body motions. The processor is arranged to call that data to modify the body representation so as to follow the sequence of motions on the detection of one or more pre-determined signals from the motion detection means.

82.

Philips submitted that the reference to “pre-determined” signals was limited in the sense that it required the system to use only a subset of the signals available from the user motion detection means and not to use all information from the user motion detection means to drive the animation. It submitted that this feature had to be read in conjunction with the feature of periodically modifying the generated body representation. It is consistent with the discussion of using keys in paragraph 25 of the specification.

83.

Nintendo did not agree with these submissions but Philips argued that its construction was supported by Nintendo’s own opening skeleton at paragraph 91. Here Nintendo distinguished between “signals received from the user motion detection means” used in the pre-characterising part of claim 1 and “on detection of one or more pre-determined signals from the user motion detection means” in the characterising part, arguing that the former referred to the continuous flow of information from the motion detector and the latter referred to events detected by analysing the continuous signals for particular characteristics.

84.

I reject Philips’ construction. As Nintendo’s opening skeleton explained the claim talks about the processor detecting one or more pre-determined signals. I accept that a system which did use a subset of the signals from the motion detector would fall within the claim but I can see no justification for reading in a limitation that the pre-determined signals are necessarily a limited subset from a wider class. They may be but do not have to be.

Claim 5

85.

Claim 5 relates to the modification of a virtual environment to change the viewpoint in synchronism with the following of the sequence of motions. There was no real debate about the meaning of this claim but it is worth noting the impact here of my conclusion on the construction of sequence of motions in claim 1. The viewpoint modification has to be synchronised with that sequence. Thus the claim is not concerned with simply showing the viewpoint of the user change as the avatar moves through the virtual world, it is concerned, for example, with the viewpoint bobbing up and down as the user walks.

Claim 1 as amended

86.

The proposed amendments to claim 1 introduce two matters. The first change is to insert a requirement that the virtual body representation is a computer based model that represents a human or other form in the virtual environment. The point of this amendment was to bolster Philips’ submission that the claim was limited to a 3D virtual modelling arrangement. Since I have accepted Philips’ construction of the unamended claim, nothing turns on it, although if I had not accepted the point on the unamended claim I doubt this amendment would have advanced Philips’ case.

87.

The second change is to introduce a requirement for an adaptive mechanism. This mechanism is arranged to adapt on the fly to signals from the motion detector in order to translate the user’s erratic, variable signals into a steady motion.

88.

As mentioned already Nintendo submitted that this language was unclear and so an amendment to insert it should not be permitted. It was not disputed and I accept that clarity can be taken into account when considering whether to allow an amendment. Clarity is referred to in s14(5)(c) of the 1977 Act. Although in the context of an amendment the point seems only to arise as a matter of discretion since it is not mentioned in s76, even the narrower approach to discretion mandated by s75(5) will allow the point to be taken since the EPO would also take the point. Although I am not addressing amendment at this stage it makes sense to deal with the clarity point now.

89.

The point of the adaptive mechanism is to translate the user’s erratic, variable signals into a steady motion. It does so by adapting on the fly to signals from the motion detector. Philips submitted that in addition to the passages in paragraph 25 of the specification in which the term “adaptive mechanism” appears, paragraph 23 is also an example of the required adaptive mechanism. Here sequences representing various walking speeds are selected based on the input from the motion detector.

90.

There was a debate between the parties about whether what is described in paragraph 14 of the specification was an example of the adaptive mechanism called for by the claim. Nintendo submitted it was not an adaptive mechanism since it was simply a process of time averaging. It was an example of a non-adaptive or “classical” filter in contradistinction to an adaptive filter. Philips did not agree and submitted the paragraph was written in general terms. In closing Philips submitted that this was an example of the claimed adaptive mechanism although Mr Carr submitted that this sort of time averaging was not within the claim because it did not involve predetermined signals.

91.

In my judgment the language relating to the adaptive mechanism in the claim as proposed to be amended is broad. It will cover any system which translates the user’s variable signals into a steady motion as long as it does so by adapting to signals from the motion detector “on the fly”. The words “on the fly” convey the idea of immediacy of the process and mean “as it happens” or “in real time”.

92.

In the context of cyclic motion like walking this can be readily understood. For example as a user walks, the body representation shows walking. The system may be set such that even if the speed of walking slows or speeds up a little bit, nothing changes. However if the real speed increases beyond a threshold, the avatar is now shown moving at a new steady but faster pace. The mechanism has adapted to signals from the motion detector and translated the user’s variable signals into a steady motion.
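
Put as a sketch in Python (with invented figures; not drawn from the patent or from any system in evidence), the behaviour just described is simply that the avatar holds a steady pace until the measured speed drifts beyond a threshold, at which point it adapts, on the fly, to a new steady pace.

```python
# Illustrative sketch of the adaptation described above: small variations in the
# measured walking speed are ignored; a change beyond the threshold produces a
# new steady pace. The threshold and the measurements are invented.

THRESHOLD = 0.3                                    # tolerated drift, metres per second

def adapt(current_pace, measured_speeds):
    for speed in measured_speeds:                  # signals arriving "on the fly"
        if abs(speed - current_pace) > THRESHOLD:
            current_pace = speed                   # adopt a new steady pace
        yield current_pace                         # otherwise keep the existing pace

# Small wobbles around 1.0 m/s are smoothed out; the jump to 1.6 m/s sticks.
print(list(adapt(1.0, [1.1, 0.9, 1.2, 1.6, 1.55])))   # -> [1.0, 1.0, 1.0, 1.6, 1.6]
```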

93.

A possible problem with the system described in the specific embodiments is how to know a user’s stride length since different users may have different stride lengths (or the same user may change stride length). Nintendo argued that this problem showed that the adaptive mechanism could not be understood and hence the feature lacked clarity. However one way of making the system work, which Prof Steed accepted, would be by calculating the timing of key signals over a window of time to take account of the number of gait cycles and therefore allow the system to predict when turning points had been reached regardless of irregular motions. This approach only works if the movements being modelled are repetitive in nature. This serves to support Philips’ case that the claim should be read that way and that, read in that way, there is no problem of clarity.

94.

I accept Philips’ submission that paragraph 14 of the specification is not referring to a filter which is non-adaptive in contradistinction to an adaptive filter. I do not have to be concerned with whether or not the paragraph involves pre-determined signals.

95.

There is no material lack of clarity in the proposed amendments to claim 1.

Claim 9 as proposed to be amended

96.

This requires that the movement of at least one part of the virtual body be directed by the measured movement of the corresponding part of the user’s body.

97.

Philips submitted that even if claim 1 did not require measurement of a range of values or continuous measurements such that a switch based system was excluded, claim 9 did exclude such a system. I do not accept that. If a system using a switch or switches is able to measure the movement of a part of a user’s body then there is no reason to exclude such an arrangement from the claim.

The interpretation of claims to computer apparatus generally

98.

The claims of all three patents are to computer implemented inventions. The alleged infringements are systems consisting of computer hardware – the Wii games console and certain user input devices – in combination with certain computer software, which will no doubt consist of Nintendo firmware but also and importantly in this case includes particular games. So for the 484 case the system alleged to fall within the claims is a Wii system with a user input device called the Balance Board running a game called Island Cycling. Infringement is pleaded both under s60(1) and s60(2), the latter case being that Nintendo has supplied means relating to an essential element of the invention for putting it into effect (with the requisite knowledge). The means relied on is the Wii hardware. Philips does not contend that dealing in software infringes, either directly or under s60(2). At one stage there was a reference to a thing called the Wii Nunchuk but this was dropped in closing. So Nintendo submits that a necessary part of Philips’ case under s60(2) is that the hardware, without the games software installed on it, is suitable for putting the invention into effect since that is part of the test for s60(2).

99.

Distinct from this argument there are the words of the claims. They use a fair amount of functional language. For example “virtual body modelling apparatus” would ordinarily be read as apparatus for virtual body modelling, in other words apparatus suitable for virtual body modelling. Moreover, submitted Nintendo, a normal general purpose computer is suitable for doing this kind of thing since all it needs is the right software to make it work. On this basis the claims may lack novelty over an item of prior art which was suitable for the relevant purposes even though it had not been married up with the software to do it. Moreover this argument is, submitted Nintendo, a squeeze with Philips’ case on infringement which I have outlined above.

100.

Philips did not agree. It referred to the EPO Guidelines For Examination (September 2013 Part F Ch IV-15). These state in paragraph 4.13 that the data processing/computer program field provides an exception to the general rule that “for” means “suitable for” and that in this field apparatus features “of the means-plus-function type” are interpreted as means “adapted to carry out the relevant steps/functions, rather than merely means suitable for carrying them out”. The Guidelines continue: “in this way novelty is conferred over an unprogrammed or differently programmed data-processing apparatus”.

101.

Philips also referred to the recent judgment of Mann J on this issue in Rovi v Virgin [2014] EWHC 1559 (Pat) at paragraphs 128-132. Here the learned judge accepted that a claim to computer apparatus for a function would be taken to mean “suitable for” that function but did not accept that this meant that such a claim covered an unprogrammed item of hardware. As he said, a bare computer would not be suitable for the activities in the claims because it simply could not achieve them. Mann J noted that the effect of the submission he had rejected would be that such claims would cover all kinds of computers even if they were not configured to carry out the relevant activity and that this would be a striking result. Finally Mann J referred to the passage in paragraphs 73-74 of the judgment of Floyd J (as he then was) in Qualcomm v Nokia [2008] EWHC 329 which stressed that it was important not to take “suitable for” too far.

102.

I will add the following brief words of my own on this subject not least because the point was I think advanced with more enthusiasm before me than by Virgin before Mann J. First I agree with Floyd J in Qualcomm that one must be cautious of any principle which is said to codify the meaning of words. On the other hand the Court of Appeal in Virgin Atlantic v Premium [2009] EWCA 1062 recognised that drafting conventions may form part of the proper construction of a patent on the basis that the skilled person is taken to know them.

103.

Second, claim language of the means-plus-function type, and I regard “virtual body modelling apparatus” as an example of that type, is generally taken by the granting authority (the EPO) to be read as means suitable for carrying out the function. That is a good reason on its own to interpret such words in that way.

104.

Third, although the problem mentioned by Mann J and by the EPO in the Guidelines is the same, the solutions they offer are slightly different in form although I doubt they differ in substance. I prefer the approach of Mann J to this problem. The fact that a general purpose computer can be programmed to become a virtual body modelling apparatus does not mean that a general purpose computer is a virtual body modelling apparatus nor is it an apparatus suitable for virtual body modelling. It is not. If the right software was installed in the computer but the computer was switched off then that might well be apparatus suitable for virtual body modelling but that is a different point.

105.

Fourth, to the extent that there is any inconsistency between these points and Philips’ case on infringement under s60(2), the result may simply be that certain items are not held to be means relating to an essential element of the invention under s60(2).

Allowability of the amendments – added matter

106.

Nintendo contends that the proposed amendments to the claims should not be allowed as they would introduce added matter contrary to s76(2) of the 1977 Act. It submits that the approach to be taken is the one set out by Aldous J in Bonzel v Intervention [1991] RPC 553 and Vector v Glatt (CA) [2008] RPC 10. There was no dispute about the general approach to be taken. One has to compare the disclosures of the application and the patent and ask if anything relevant to the invention has been added. The documents are read through the eyes of a person skilled in the art, imbued with the common general knowledge. The documents are read as a whole. When looking to see what is disclosed in the application one needs to consider not only what is expressly disclosed but what is necessarily implicit but obviousness is not the test. The fact an idea is obvious over the application does not permit its addition to a patent. The test is strict in that matter disclosed in the patent which is not clearly and unambiguously derivable from the application is added matter.

107.

However, as Philips emphasised, the English Courts have long recognised a distinction between an amendment which merely broadens the coverage but does not disclose any new matter and one which discloses new matter (see AC Edwards v Acme [1992] RPC 131, Texas Iron Works [2000] RPC 207 and AP Racing v Alcon [2014] EWCA 40). The principle is not in dispute but its application can be tricky. Take the facts of AP Racing. The claim as granted included a feature (asymmetric peripheral stiffening band (PSB)) which was a generalisation from the disclosure of the application. The application included a clear and unambiguous disclosure of PSBs which would fall within the claim but it did not describe them in that general way. Floyd LJ held that although the claim covered asymmetric PSBs in general, it did not disclose any configuration of PSB which is not disclosed in the application. This does not mean that any generalising amendment is allowable but it emphasises that the fact an amendment is a generalisation does not necessarily mean it is unallowable.

108.

Nintendo referred to the discussion of a particular kind of added matter known as “intermediate generalisation” described by Pumfrey J in Palmaz [1999] RPC 47 and approved in the Court of Appeal in LG Philips v Tatung [2007] RPC 21 and in Vector v Glatt. The passage from Pumfrey J’s judgment is as follows:

If the specification discloses distinct sub-classes of the overall inventive concept, then it should be possible to amend down to one or other of those sub-classes, whether or not they are presented as inventively distinct in the specification before amendment. The difficulty comes when it is sought to take features which are only disclosed in a particular context and which are not disclosed as having any inventive significance and introduce them into the claim deprived of that context. This is a process sometimes called “intermediate generalisation”.

109.

These words are a useful description of intermediate generalisation but they are not a statute. I believe the reference to something being disclosed as having inventive significance is not a necessary part of Pumfrey J’s description nor do I understand the later approvals by Courts of Appeal of this passage to have focussed on those words.

110.

Nintendo took two added matter objections to the amendments: adaptive mechanism and claim 9.

111.

I start with adaptive mechanism. Nintendo advanced three points. First intermediate generalisation in taking the adaptive mechanism out of its context, second changing from receiving signals to taking measurements, and third changing from a plural to a singular reference to a user. The second and third points are related. Philips did not accept any of them.

112.

I reject the first point. It is closely related to the clarity argument and the point about a sequence of body motions. I will characterise Nintendo’s case in the following way. The disclosure of the adaptive mechanism in the application (just as in the descriptive part of the specification of the granted patent) is all in the context of walking or other repetitive body motions. Amended claim 1 takes the adaptive mechanism out of that context and into a wider context, into which it does not fit and is added matter. The answer is that the premise is correct but the conclusion does not follow because the claim is still in the same context, i.e. repetitive body motions. There is no intermediate generalisation.

113.

I reject the second point. It is true that the language used in the claim amendment is somewhat different from the language used in the passage in the application relied on as support for it (p7 ln30 – p8 ln1). The text is the same as paragraph 25 of the patent. The two passages side by side are:

This mechanism adapts on the fly to the measurement apparatus output, to translate the users erratic, variable measurements into a steady walking motion. [application]

the adaptive mechanism is arranged to adapt on the fly to the signals received from the user motion detection means to translate the user’s erratic, variable signals into a steady motion. [claim]

114.

The difference relied on is that “the measurement apparatus output” in the application has been changed to “the signals received from the user motion detection means” in the claim and later that “erratic, variable measurements” have been changed to “erratic, variable signals”. Nintendo emphasises that a measurement is different from a signal. I agree that the two things are different but there is no added matter here. That is because the application taught that the mechanism adapts on the fly to measurement apparatus output (my emphasis) and that output is “the signals received from the user motion detection means”. I reject this argument.

115.

Finally the users, user’s and users’ point. Nintendo points out that the relevant passage in the application (see above) refers to “the users erratic, variable measurements”. This is clearly a grammatical slip since there is no apostrophe. Nintendo argues it would be understood as a plural – users’ – on the basis that the passage was concerned with addressing how different users walk in different ways. Thus the word to be introduced into the claim (user’s (singular)) is added matter. I reject that. While it is true that accommodating different users is one of the ideas in the document it is not the only idea. Accommodating the erratic motion of one person is also contemplated when the document is read as a whole. Thus even if, which I doubt, the skilled addressee would think the correct place for the apostrophe in the application was to make the word a possessive plural, the use of the possessive singular in the claim is not added matter.

116.

The objection to claim 9 is that the passage relied on for support (p5 ln5-10 of the application, corresponding to paragraph 16 as granted) only refers to specific body parts and does not support the generalisation found in claim 9 which refers to body parts generally. The relevant passage is the one at the start of the discussion of the specific embodiments. It is:

The following description is specifically concerned with modelling and controlling the legs of a virtual humanoid body that a user or participant in a virtual world may control, with the mechanism for control of the leg movements of the virtual body being directed by measured movement of the users legs. As will be readily appreciated, many of the techniques described may also be used for controlling movement of the arms and head of a virtual body.

117.

Thus it can be said that the passage itself includes a generalisation, pointing out that while the description to follow relates to modelling legs, the techniques may also be used for the arms and head. Nintendo’s argument is in effect to ask rhetorically why the patentee should be allowed to generalise after the event to any body parts and as a result go even further than it was prepared to go in the document itself.

118.

I reject this added matter attack as well. Although the language in claim 9 is a generalisation as compared to what has been disclosed expressly and by necessary implication in the application as filed, the patent as amended does not disclose any method or apparatus which is not disclosed in the application. It is capable of covering things which are not disclosed but that is not the test for added matter (AC Edwards v Acme etc.).

Infringement

119.

No distinction is drawn between the Wii and Wii U systems. I will start with claim 1 as amended since if that is infringed then so is claim 1 as granted.

120.

The Wii game system includes a console which can be connected to a television set and various units to allow players to interact with the console and play the games. Probably the best known unit is the Wii remote, a hand held unit which can be swung in the air to play games like tennis. However while the Wii remote is highly relevant to the case on the 498 and 650 patents, it is not so relevant to the case on the 484 patent.

121.

Another unit players can use is a board called the Balance Board. The player stands on the Balance Board and can, for example, make a stepping action. The Balance Board contains strain gauges which allow the motion to be measured and the resulting signals can be used to control an avatar of the player in the relevant game. The Wii uses a full 3D graphics engine. The environment in which the avatar is placed, and the avatar itself, are both rendered in 3D graphics.

122.

For this part of the case Philips relies on a game called Island Cycling. In this game the user steps onto the Balance Board and then effectively runs gently on the spot. On the screen a person is shown cycling. The speed of the real user’s steps on the Balance Board controls the speed of the cycling avatar. If they step faster, the cyclist speeds up and vice versa. The user can control their avatar and cycle around in the virtual world. The evidence also showed that if the user takes an occasional step off the Balance Board as they are running on the spot, this does not alter the speed or the pedal action of the avatar on the screen.

123.

At this stage I will consider whether the Wii system set up this way and playing Island Cycling is an apparatus within claim 1. I find that it is a virtual body modelling apparatus and satisfies this part of claim 1, including the elements of claim 1 as amended which require the virtual body representation to be a computer based model that represents the human or other form in the virtual environment. Nintendo did not seriously argue to the contrary.

124.

Nintendo also accepted that the Wii system with the Balance Board in this context fulfils the requirements of a user motion detection means monitoring the movement of the user. In fact the way the system works is by measuring the centre of mass of the user on the board and detecting its shifting from side to side as the user runs on the spot.
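
To illustrate how a shifting centre of mass can be derived from a board of this general kind, the sketch below assumes four corner load sensors whose readings are combined into a load-weighted average position. The sensor layout, names and numbers are my own assumptions for illustration, not evidence of the Balance Board’s internal design.

# Sketch: centre of pressure from four assumed corner load sensors placed
# at (+/-1, +/-1); a left-right weight shift shows up in the x coordinate.

def centre_of_pressure(front_left, front_right, back_left, back_right):
    total = front_left + front_right + back_left + back_right
    x = (front_right + back_right - front_left - back_left) / total
    y = (front_left + front_right - back_left - back_right) / total
    return x, y

# A user leaning onto their right foot pushes the x coordinate positive.
print(centre_of_pressure(10.0, 30.0, 10.0, 30.0))   # -> (0.5, 0.0)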

125.

Nintendo argued that there were in fact two sequences of body motions, a left foot down sequence and a right foot down sequence and each was keyed to the respective foot of the user. Thus the claim was not satisfied. I do not accept that this is the right way of looking at what is going on. The relevant sequence of body motions in the Island Cycling game is the sequence of cycling motions as the avatar’s legs move up and down on the pedals. The generated body representation is modified to follow this sequence of motions when the system detects the signals from the Balance Board.

126.

Both Nintendo and Philips asserted that the Island Cycling game running on the Wii shows the existence of an adaptive mechanism as called for by the claim but it was not clear whether they were referring to the same aspect of the game.

127.

Nintendo submitted that the fact the cyclist speeded up as the user stepped faster showed the existence of an adaptive mechanism. If the speed of the user’s steps changes, so the speed of the cyclist changes. However the system takes a finite time to apply the correction as it adapts to the new speed. This involves the use of an algorithm belonging to a class known as predictor/corrector algorithms. It is an adaptive mechanism as far as Nintendo is concerned but Nintendo argues that the mechanism is “tuned” to the opposite end of the spectrum as compared to the patent. That is because the game is trying to pick up any variation in motion and reflect it in the motion of the avatar. It is true that very small variations are not picked up and reflected in the motion of the cyclist but that just means the system is not perfect. In other words it is not adapting to translate erratic variable motion into steady motion because it tries to reflect all changes as faithfully as it can; there are just some very small changes which it cannot reflect.

128.

Philips agreed with Nintendo’s point that the game picks up changes in speed of the user’s steps and reflects it in the speed of the cyclist. Philips points out that at a steady state one entire pedal cycle of the avatar will broadly be in phase with the cycle of the user’s foot steps and when the steps speed up (or slow down) the rate of advancement of the pedal angle of the avatar is adjusted progressively over a number of animation frames to bring the pedal cycle back into broad phase alignment with the steps at the new stepping rate. Philips submits that this is an adaptive mechanism within the claim and argues that a particular experiment conducted for the proceedings shows the effect of the adaptive mechanism. In the test the user is playing the game, taking gentle running steps on the Balance Board. Then for one step, instead of placing their foot on the Balance Board, the user’s foot steps onto the floor at the side. The user does not break the stepping rhythm and the next step with that foot is correctly placed on the Balance Board and the stepping continues. During the process the cyclist’s movements do not change. The cyclist continues to cycle with the pedals turning and their legs moving appropriately. Thus Philips submits the user’s motion in the physical world is not steady, it is uneven, but the avatar maintains a steady pedalling action.
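
The progressive re-alignment described by Philips can be pictured with a short sketch. The correction gain, frame rate and phase arithmetic below are my own illustrative assumptions and are not taken from the evidence about how Island Cycling is actually coded.

import math

# Sketch: each animation frame the pedal phase is advanced by the current
# rate plus a small fraction of the phase error between the pedal cycle
# and the stepping cycle, so the two drift back into broad alignment.

def advance_pedal(pedal_phase, pedal_rate, step_phase, dt, gain=0.1):
    error = (step_phase - pedal_phase + math.pi) % (2 * math.pi) - math.pi
    return pedal_phase + pedal_rate * dt + gain * error

phase = 0.0
for frame in range(5):
    phase = advance_pedal(phase, pedal_rate=2.0, step_phase=1.0, dt=1 / 60)
    print(round(phase, 3))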

129.

In my judgment the analysis of Philips is the relevant one. The points made by Nintendo are true in fact and they show that an adaptive mechanism of some kind is in operation. However Nintendo’s “tuning” point shows that the function of the adaptive mechanism which simply adjusts the speed of the avatar to the speed of the steps as best it can is not performing the claimed function. It is not translating erratic variable motion into a steady motion. However the stepping off experiment relied on by Philips demonstrates that the adaptive mechanism in the Wii playing Island Cycling does perform the claimed function, because it translates the user’s erratic variable motion into steady motion.

130.

Nintendo also submitted that the stepping off phenomenon did not show the Wii adapting to the “input signals”, in other words to the signals received from the user motion detection means. I do not agree. The system is translating the user’s erratic variable motion into steady motion and it does so by acting upon the input signals. By not showing a change in pedalling when the user misses a step it has adapted to those input signals.

Infringement - claim 5

131.

Claim 5 requires the modulation of the viewpoint. In Island Cycling the viewpoint wobbles in time with the cycling motion. This is exactly the viewpoint modulation called for by claim 5 which is in synchronism with the relevant sequence of body motions. The claim is satisfied by a Wii system playing Island Cycling.

Claim 9 as proposed to be amended

132.

At one stage before trial Claim 9 had been dropped but then Philips sought to reinstate reliance on it and by the opening, Nintendo had consented to that course. However neither expert focussed on infringement of claim 9, no doubt because of its status earlier in the case. Moreover there was little focus in argument either. Read broadly I can see no serious argument about infringement of claim 9. In Island Cycling the avatar’s feet on the pedals move broadly in time to the feet of the user as they step on the Balance Board. However at times in this case Philips have emphasised that the way Island Cycling actually works is by measuring the shift from side to side in centre of mass of the user as they step. It utilises the timing of the transitions between what it interprets as the left foot travelling down and the right foot travelling down. In other words the Wii is certainly not making a direct measurement of the movement of the user’s feet and one might query whether it is really making any measurement of the movement of the user’s feet at all. It is measuring the movement of the centre of mass of the user’s entire body.

Infringement - s60(1) and s60(2)

133.

I will start with direct infringement under s60(1). For this purpose I take it that a Wii system with a Balance Board running Island Cycling is an apparatus within claim 1. The pleadings include an allegation that sale of a Wii in the UK is sale of a product within claim 1. I reject that argument. A Wii console sold by Nintendo to customers is not, in the state it is sold, an apparatus within claim 1. The game software is available on optical disks which are placed in the Wii console in order to play the game. To be within claim 1 it seems to me that the relevant disk, such as a disk carrying the Island Cycling software, at least has to have been inserted into the Wii unit and a Balance Board has to be connected. In that state, whether the unit is switched on or switched off, it seems to me that the claim is satisfied on the assumption I have made.

134.

Philips also relied on s60(2) although neither party explored the details of the test for infringement under s60(2).

135.

Philips argued that Nintendo infringed the patents under s60(2) of the Act by supplying means relating to an essential element for putting the invention into effect in the relevant circumstances. Nintendo accepted responsibility for sales in the UK and no issue about the state of mind of the supplier arose. The only question is whether the items relied on by Philips are “means essential”. Philips pleaded that each of the following were means essential in relation to the 484 patent: the Wii, Wii remotes, the Wii Balance Board, the Wii U Basic Pack and the Wii U Premium Pack. By the “Wii” in this context Philips means a complete Nintendo Wii home entertainment system which is sold as a package. It includes the console, at least one remote and includes at least one software title. The Wii U Basic Pack and the Wii U Premium Pack are two varieties of the Nintendo Wii home entertainment system which are sold in the UK as an 8GB Basic pack and 32GB Premium pack. For the purposes of this analysis there is no need to distinguish between the Wii, the Wii U Basic Pack and the Wii U Premium Pack. They each comprise a collection of hardware and firmware (including a console, a remote and relevant operating system firmware) and in addition at least one software title on a disk. I will refer to the collection of hardware and firmware with a console and a remote as the “hardware package”.

136.

I have no doubt that a hardware package sold bundled with a Balance Board and a disk carrying software for Island Cycling amounts to the supply of a means relating to an essential element for putting the invention into effect. The same would be true if the package of software included any game which, when loaded, had the same properties relevant to the 484 patent as Island Cycling. I will refer to this sort of software as relevant software. The same would be true if the disk of relevant software was physically separate at the point of sale but included as part of the single transaction.

137.

One different question is whether the hardware package without a disk of relevant software (or Balance Board) is a means essential. This is where Nintendo’s case on the “suitable for” point bites. S60(2) has two elements, subjective and objective. The means must be “for putting the invention into effect” in the sense that objectively they are suitable for putting the invention into effect. There is a second subjective element focussed on whether the means are intended for this purpose but that is not in dispute. Philips’ pleaded case does not limit itself to sales of a hardware package with relevant software.

138.

It seems to me that a Wii console alone, and in particular without either any optical disk or a Balance Board, is a means relating to an essential element of the invention for putting the invention into effect. It is suitable for putting claim 1 into effect because, when a Balance Board and the relevant optical disk are married up with the console, the product of claim 1 is created. Moreover it is a critical part of that combination. Although superficially the question might look to be similar to the question of whether an unprogrammed Wii console or an unprogrammed general purpose computer is an “apparatus suitable for virtual body modelling”, in my judgment these are two different issues.

139.

I also recognise that at least in theory this reasoning could lead to a conclusion that the sale of an item after a patent is granted, which was already in the prior art beforehand, may be prohibited under s60(2) but I do not regard that as an objection. It is how the case in Merrell Dow v Norton [1996] RPC 76 arose. Moreover s60(2) does not provide an absolute monopoly on dealings with means essential. Only sales with suitable knowledge (actual or inferred) are prohibited.

140.

Finally there is a separate question whether a Wii remote or a Balance Board on its own is a means essential. I suppose the Wii remote might act as a means to start every game but that was not explored in submissions. I reject the suggestion that the Wii remote is a means relating to an essential element of the invention in the 484 patent. The Wii remote has nothing to do with Philips’ infringement case on 484. However the Balance Board is another critical element in the combination which creates a complete system within the claims of 484 and I find that its sale, with the requisite knowledge, will be caught by s60(2).

Novelty

141.

Nintendo contends that all three of the items of prior art relied on (WCTM, SEGA and Alpine Racer) deprive some of the claims of the 484 patent of novelty (s2(1) of the 1977 Act). I will take each one in turn.

WCTM

142.

World Class Track Meet (WCTM) is a game played on a game console called the Nintendo Entertainment System (NES) and using an input device called the Power Pad. The Power Pad was made by Bandai. The NES console was launched in 1983. The game and the Power Pad were made available in 1988. It is not in dispute that the combination of all three together formed part of the state of the art before the priority date of the 484 patent. I will refer to the combination of all three as the WCTM game.

143.

The Power Pad was a floor mat comprising a number of foot operated pressure sensitive pads. A diagram of the Power Pad from its instructions is shown below:

144.

The WCTM game allows the user to play four different athletics games: 100m Dash, Long Jump, Hurdles and Triple Jump. In the 100m Dash game the player runs on the spot on the mat and the player’s avatar is depicted on the screen. An example of the screen is shown below:

145.

The image on the screen is formed using a 2D sprite based graphics engine. As the player runs, the avatar is animated to run on the screen. The avatar’s legs move up and down. The impression of forward movement is given by the green stripes on the grass moving downwards as the player runs and also by objects on the track moving downwards in the same way. There is also a crude perspective effect whereby the grandstand moves up on one or two occasions as the runner runs to give the impression of moving closer to it. I will focus on the 100m Dash but it is fair to note that in the jumping games, the grandstand rises and dips as the avatar jumps to give the impression of rising and falling.

146.

Nintendo submits that the WCTM game is a virtual body modelling apparatus, that there is a virtual environment and a virtual body representation. The virtual body moves in the virtual environment as the avatar progresses along the track by running (and by jumping in the jumping games). Nintendo also submits that the Power Pad is a user motion detection means. Nintendo submitted that all the other claimed features were satisfied as well. The relevant sequence of body motions was the running action.

147.

Philips contends that WCTM is not a virtual body modelling apparatus. The 2D backgrounds used in WCTM are not virtual environments within the meaning of the Patent. The player cannot navigate in that environment in three dimensions. The movement of the 2D bitmap background images used to portray the racecourse merely provide a rough indication of the player’s progress. Moreover the Power Pad is not “user motion detection means monitoring movement of the user” because it does not “monitor” the player’s movements by measuring a range of values corresponding to the player’s physical movements. The Power Pad only detects button presses and is no different from a computer game on a conventional home computer operated with a keyboard. Finally although Philips did not dispute the point on sequence of body motions, it submitted that the WCTM game did not satisfy the claim requirements for acting on the detection of one or more “predetermined signals” from the motion detector. The game uses all the signals not a subset of signals.

148.

All these issues turn on the construction questions I have already decided. I find that WCTM does not involve a virtual environment and is not a virtual body modelling apparatus. There is no virtual environment as it would be understood by a person skilled in the art at the relevant time in WCTM. For that reason claim 1 as granted is novel.

149.

However I reject Philips’ argument that the Power Pad is not user motion detection means monitoring movement of the user. The claim does not require monitoring a range of values. The Power Pad mat monitors the speed at which a player is running on the spot and therefore monitors the movement of the user.

150.

I also reject Philips’ argument about predetermined signals. The claim is not limited to a system which uses only a subset of signals from the motion detector. A system which uses all the signals will fall within the claim.

151.

At this point I will mention an alleged distinction between the Power Pad and the Wii Balance Board. The Wii Balance Board uses strain gauges whereas the Power Pad uses switches; however, in case it matters, I will say that I do not accept there is any real difference in the way they operate in their respective contexts. It was suggested that the Balance Board makes continuous measurements whereas the Power Pad does not. However the evidence was that the switches on the Power Pad are not of the momentary type, which might only register an instantaneous actuation as the user’s foot presses down. The switches on the Power Pad remain actuated while the user’s foot is pressed down and will be on (or off) all the time the foot is down. In other words the monitoring is continuous in that after the foot comes down the signal indicating “foot down” will continue until the foot lifts up again.

152.

Since claim 1 as granted is novel it necessarily follows that claim 1 as amended will also be novel, but it is nevertheless worth considering whether the WCTM game discloses an adaptive mechanism as required by claim 1 as amended. There is no dispute that in operation when the user runs on the spot on the Power Pad in the WCTM game the system will tolerate occasional steps off the pad. The point is shown in a video of some tests which were run by Nintendo’s lawyers. It is essentially the same test as was conducted for Island Cycling. A player runs on the spot on the pad and as a result the runner on the screen runs at an appropriate speed. The player then occasionally placed one foot off the pad between steps. This did not have any noticeable effect on the runner’s running speed. The experiment shows that WCTM is capable of ignoring certain erratic variable signals and depicting a steady motion based on, in effect, the average of the player’s running speed over time. The speed of the runner is determined by the speed of the steps on the pad.

153.

In my judgment this shows that WCTM has an adaptive mechanism as required. It can indeed translate the user’s erratic variable signals into a steady motion and it must be doing so by adapting on the fly to the signals received from the user motion detection means. In the case in which the foot steps off the pad the signals from the user motion detection means will in fact be different from the signals they would have been if the step had not been taken off the pad, but the system adapts to that difference by effectively ignoring it and keeping the runner running at the same average speed as before.

154.

Nintendo submitted that claim 5 was also disclosed in WCTM. Again, although I have held claim 1 to be novel, I will consider the point on claim 5. Nintendo relied on the change in viewpoint as the runner runs along the track. However, in my judgment that is not an example of what is required by claim 5. Claim 5 requires the viewpoint to change in synchronism with the following of a sequence of motions. In other words it requires the viewpoint to change, for example as the user’s head would move up and down as they run. All that happens in WCTM is that the viewpoint changes as the user’s avatar runs along the track. That is not what claim 5 is concerned with and it is not disclosed by WCTM.

155.

If I had accepted that the sequence of body motions in claim 1 could include motion which was not repetitive, it would include the jumping action in the WCTM game. Since the background rises and falls in time with the jumps, it follows that on the relevant hypothesis (which I have rejected) the features of claim 5 would be satisfied.

156.

Nintendo submitted that the features of claim 9 were satisfied by the WCTM game because the movement of the avatar’s feet is directed by the measured movement of the user’s feet. Philips’ denial of this point depended on the construction of claim 9 which I have rejected. I find WCTM satisfies the features of claim 9.

SEGA - Heavyweight Champ

157.

Nintendo had relied on the prior use of the SEGA Heavyweight Champ boxing game and the utility model. They are not the same but for present purposes I do not need to distinguish between them.

158.

SEGA Heavyweight Champ is a boxing arcade game dating from 1987. It allows a player to play a boxing game from a first person over the shoulder perspective with a punching lever in each hand against a computer generated opponent. The player can throw punches and put up blocks by moving the punching levers up and down and in and out. The levers can also be used to rotate the whole of the cabinet side to side allowing the player to move the avatar to the left and to the right. The graphics are presented using a 2D sprite based engine. The arcade unit itself can be seen in Fig. 2 of the utility model:

159.

A representation of how the game looks on screen can be seen in Fig. 3 of the utility model:

160.

The display shows background images forming part of the ring and the audience and also shows the opponent boxer inside the ring and an outline of a ‘transparent’ boxer (25) representing the user’s avatar.

161.

The SEGA system uses 2D sprite based graphics and therefore is not a virtual body modelling apparatus with a virtual environment etc. required by claim 1. Moreover I do not accept that a series of the user’s boxing punches represent a sequence of body motions as required by the claim. Accordingly claim 1 as granted is novel.

162.

In relation to claim 5, it is clear that in the SEGA game there is a change of viewpoint as the avatar moves from side to side and forwards and backwards. However, in my judgment that is not enough to satisfy the claim because it is not in synchronism with the following of a sequence of motions, because the punches are not the right kind of body motions. Accordingly, claim 5 is novel in any event.

163.

Even on Philips’ construction the features of claim 9 itself would be satisfied by the SEGA game.

Alpine Racer

164.

Alpine Racer is an arcade game simulating skiing using a full 3D graphics engine and a rotatable platform input device which emulates a pair of skis. It was released shortly before the priority date. The player stands on a pair of steps and holds on to two bars on either side of the steps for stability. The bars have no effect on the game play. The player moves the steps left or right about an axis to control the skier on a screen positioned in front of the player. In speed racing mode the player races against computer controlled opponents to complete a downhill ski race course.

165.

The screen displays the avatar as a skier on a ski slope. The viewpoint is first person over the shoulder and the virtual camera is tied to a position behind the avatar (such as the head). During the game the avatar skis down a course and barriers, piste flags and arrow signs are present to indicate the piste boundary and direction.

166.

When the race begins the animation is triggered in which the skier skates a few steps and then assumes the crouched schuss position. The skier becomes controllable shortly after passing the start line. If the player keeps the steps to the central position the skier maintains the schuss position and travels in a straight line. The player can alter the direction of travel by moving the steps. If the player wants to ski left he moves the steps to the right and vice versa. Moving the steps to the right causes the representation of the skier to lean to the left while the skis push out to the right. The amount of lean is proportional to the amount of deflection of the steps. Even a small adjustment in the position of the steps results in a corresponding small change in the skier’s pose. Thus the game animates the avatar based on the instantaneous position of the steps although there is a degree of latency between the movements of the player and the avatar. As the skier executes turns the virtual camera pivots and a change in viewpoint occurs. The pose of the avatar’s body is determined by a mathematical model based on the position of the feet which are sensed by the input device.
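
The proportional relationship described above can be pictured with a trivial sketch. The maximum lean angle and the sign convention below are invented for illustration; the point is only that the avatar’s lean is a scaled, sign-inverted copy of the measured step deflection.

# Sketch: avatar lean as a scaled, sign-inverted copy of the measured step
# deflection (all constants are illustrative only).

MAX_LEAN_DEGREES = 35.0

def avatar_lean(step_deflection, max_deflection=1.0):
    """step_deflection in [-1, 1]; positive means the steps are pushed right."""
    proportion = max(-1.0, min(1.0, step_deflection / max_deflection))
    return -proportion * MAX_LEAN_DEGREES    # the skier leans the other way

print(avatar_lean(0.5))    # steps pushed half-right -> skier leans 17.5 degrees left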

167.

At some points in the game the skier moves with a skating action. Skating occurs when a canned animation is played at the start of the race before the avatar becomes controllable and also occurs at various points in the game. The canned skating sequence at the start did not matter because it was not controllable at all but the other aspects of skating were disputed on the facts. I will return to that below.

168.

In terms of claim 1, there is no dispute that Alpine Racer is a virtual body modelling apparatus that animates the body in the virtual environment nor is there any dispute that the input mechanism is user motion detection means.

169.

Philips did not accept that Alpine Racer fell within claim 1 as granted because there was no stored sequence of body motions which occur in a particular order in response to predetermined signals. First, I can dispose of the point about pre-determined signals. This depended on Philips’ case that the term was limited to a subset of the available data but I have rejected that construction. Whether the system uses all or a subset of the signals from the motion detector makes no difference; both approaches infringe.

170.

Nintendo submitted that the sequence of body motions requirement was satisfied by the behaviour of the game while the player executes skiing turns, moving side to side. As the player makes these moves the avatar makes corresponding movements from side to side. However Philips submitted that the claim required a stored sequence of body motions, in other words pre-stored data which defines a sequence. Thus it was not satisfied by a system which simply followed a user’s movements which happened to consist of a sequence of body motions. I accept Philips’ submission. The normal skiing in Alpine Racer does not satisfy claim 1 as granted.

171.

Thus to bring Alpine Racer within any of the claims Nintendo has to rely on the skating sequences which occur once the avatar has become controllable. It was not in dispute that the skating was a stored sequence of body motions but almost everything else about it was disputed.

172.

First the parties did not agree how skating was initiated. I am not satisfied that Nintendo proved how skating could be initiated. Their case was based on inferences drawn from watching videos of the game being played. The videos were prepared under the experimental evidence regime in CPR Part 63. One question was whether skating could occur, if speed conditions were right, by a player making wide side to side movements. It is true that a video shows skating happening on this occasion but the evidence does not establish the causal relationships. It would have been simple enough to prove the point by playing the game twice and making different movements at the relevant time but that was not done. I am not satisfied that the experimental evidence shows that a player can initiate skating in this way. Nintendo also relied on a statement in a US manual for the game but that was not a sound basis from which to make the finding sought. The pleaded prior use of the game was its use in Japan and Nintendo had proved a Japanese manual. The Japanese manual did not contain the statement on which Nintendo wished to rely. Prof Steed accepted in cross-examination that the US manual makes a statement about skating which is not consistent with what can be seen on the videos of the (Japanese) version of Alpine Racer. It states that all other skating apart from the skating sequence at the start is initiated by the player but on any view that is not right. Nintendo described this cross-examination as desperate but I do not agree. There is nothing inherently unlikely about the possibility that the US and Japanese versions of the game differ somewhat.

173.

Unless the player’s motion causes skating to occur the skating sequences cannot bring the game within claim 1. Accordingly I find claim 1 is not satisfied.

174.

Second there was an argument about viewpoint modification. This is not relevant to claim 1 but it is convenient to deal with it now. As Nintendo submitted, it is not easy to spot. Prof Steed gave evidence that he could see the viewpoint being modulated as required by the claim. Prof Darrell could not see it. To my eye the examples relied on by Prof Steed did include modulation of the viewpoint and in addition, since Prof Steed has more experience with computer games than Prof Darrell, in as much as it is a question of the opinions of the experts rather than a matter for the eye of the court, I prefer Prof Steed’s opinion on the point.

175.

Third there was a dispute about whether a corrector algorithm was used as normal skiing is resumed at the end of a skating sequence to smooth out the transition. This point provided another occasion in which the experts and the court peered at the videos. Prof Steed’s view was that a corrector algorithm was being used to get the animated ski position in the right place. Prof Darrell did not agree. He could see jumps in the animation and so could I. Nintendo submitted that one needed to focus on the skis rather than the body. In any event I am not satisfied that the transition between skating and skiing is smoothed out in the manner contended for by Nintendo.

176.

In summary I find that Alpine Racer does not disclose an apparatus within claim 1 as granted. Accordingly it cannot fall within any of the other claims.

Obviousness

177.

The structured approach to the assessment of obviousness was set out by the Court of Appeal in Pozzoli v BDMO [2007] EWCA Civ 588. I will take that approach.

178.

For the purposes of considering obviousness over any of the WCTM game, SEGA Heavyweight Champ and Alpine Racer, I will take the person skilled in the art to be the skilled games system developer. I have identified the common general knowledge of that person (or team) above.

Obviousness: WCTM

179.

Claim 1 as granted has been construed already and this is not a case in which it would be fruitful to identify an inventive concept which relates to claim 1.

180.

The only difference between claim 1 as granted and the WCTM game is that the WCTM game is not a virtual body modelling apparatus and does not involve a virtual environment (etc.).

181.

Nintendo relied on the evidence of Professor Steed who explained what obvious improvements he thought a skilled games system developer would make to WCTM if presented with it in 1995. Essentially, Professor Steed’s evidence was that a skilled person or team would upgrade the game to use a full 3D graphics engine. This would involve upgrading the hardware and software but would not involve any inventive step.

182.

The first question is whether a skilled games system developer would seek to improve the WCTM game at all in 1995. Philips submitted that this had been overlooked by Nintendo and that the way in which they instructed Professor Steed meant that the question had been assumed in Nintendo’s favour. It is true that Professor Steed was instructed to consider what improvements would be made by a person skilled in the art in 1995.

183.

Philips also relied on the evidence of Professor Darrell. His opinion was that a skilled person in the field of virtual reality research would not be interested in making any improvements to the WCTM game. I am sure that Professor Darrell’s opinion reflects the reality of a person not working directly in the field of computer games. A virtual reality researcher would not be at all interested in a 1980s computer game like WCTM. However, in my judgment, that is not the relevant question.

184.

In my judgment to a skilled games system developer concerned with computer gaming presented with the WCTM game in 1995 the interesting thing is the Power Pad input device and the way that allows a person to run on the spot leading to the animation of a running avatar in the game. Moreover it would be readily apparent to that skilled person that the game used crude graphics by 1995 standards but that the graphics were not a necessary part of the game. They would be sufficiently interested in the WCTM game to consider whether improvements could be made. It is not an exercise in hindsight to consider a skilled person making improvements to WCTM at that time.

185.

Professor Steed’s opinion was that, from this position, it would be obvious to keep the same input device but upgrade the system to use a full 3D graphics engine. Mr Carr submitted that there was no evidence that such an upgrade to the graphics of a game had ever been done. It is true that the evidence did not give examples of such things happening but I accept Professor Steed’s opinion. In my judgment it would be entirely obvious to upgrade and improve the WCTM game in 1995 to use state of the art 3D computer graphics and also to keep the same input device.

186.

The result would be a game just like WCTM in terms of the way the user interacts with the game but in which the avatar appearing on the screen would be a virtual body modelled in a virtual environment. The athletics track would be modelled as a virtual environment and the avatar would be modelled as a virtual body in that environment. The running avatar in the 100m Dash would run in a three dimensional world and the avatar would jump in the jump games in a corresponding fashion. The way the game works would be just the same as the way the original WCTM works. So in the 100m Dash the avatar’s running speed would increase as the user runs more quickly on the spot. The system would also behave in the same way as the original as regards stepping off the mat. The running of the avatar would not be sensitive to occasional missteps. The avatar would run steadily despite such erratic motion by the user.

187.

Thus all the elements of claim 1 as granted which were not satisfied by the original WCTM system would be satisfied by the 1995 version of WCTM. Furthermore claim 1 as proposed to be amended would also be satisfied by this upgraded game and so too would claim 9 be satisfied. All these claims therefore lack inventive step over the WCTM game.

188.

Claim 5 raises a distinct issue. The question depends on a more detailed look at how the upgraded game would model the user’s movements and show the virtual environment. The original WCTM game depicts changes in the user’s viewpoint as the avatar runs along the track in the 100m Dash and jumps in the Long Jump. For claim 5 to be satisfied in a running animation like 100m Dash the viewpoint has to be modified to be synchronised to follow the running motion itself.

189.

The viewpoint in the original WCTM game is a very simple first person over the shoulder viewpoint. Professor Steed’s view was that in the upgraded system the same viewpoint would be adopted. Using a 3D graphics engine this would involve notionally attaching the virtual camera to the back of the avatar so that it followed the avatar along the track and in doing so the viewpoint would change with the basic forward movement of the avatar. I agree. That is essentially how the virtual camera operates in the WCTM system itself in as much as you could describe the image in the original game as one created with a virtual camera. It would not involve an inventive step for the skilled person to set up the 1995 version in this way. That means the viewpoint would change with the basic forward movement of the avatar. In the upgraded game using a 3D graphics engine one would see the stand getting closer as the avatar runs forward in the 100m Dash. However that sort of change in viewpoint does not satisfy claim 5.

190.

Philips submitted that there was no example of a game before the priority date in which the viewpoint worked as required but I do not accept things are that simple. The common general knowledge included the game Doom from 1993. It used a different graphics engine from full 3D graphics and also used a strict first person viewpoint which is different from the one used in WCTM. However it also incorporated the idea of the viewpoint bobbing up and down as the avatar moved forwards.
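
The bobbing viewpoint referred to above can be pictured with a sketch of a virtual camera that follows the avatar and adds a small vertical offset in time with the stride phase. This is my own illustration of the general idea, not evidence of how Doom or any notional upgraded game is implemented, and all the numbers are invented.

import math

# Sketch: camera position = avatar position + a fixed over-the-shoulder
# offset + a small vertical bob tied to the stride phase.

def camera_position(avatar_pos, stride_phase, bob_amplitude=0.05):
    ax, ay, az = avatar_pos
    offset_back, offset_up = 1.5, 1.7
    bob = bob_amplitude * math.sin(2 * math.pi * stride_phase)
    return (ax, ay + offset_up + bob, az - offset_back)

for phase in (0.0, 0.25, 0.5, 0.75):
    print(camera_position((0.0, 0.0, 10.0), phase))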

191.

I accept Prof Steed’s opinion that the feature of the viewpoint bobbing up and down as the avatar ran inside the virtual world of the upgraded 1995 version of the WCTM game would be obvious. It arises naturally from the choice of viewpoint fixing the virtual camera to the head of the avatar and modelling the viewpoint that way. Moreover the idea of the viewpoint rising and falling (albeit in a different way) is already present in WCTM in the jumping games. Since the game involves modelling a user running in synchronism, broadly speaking, with the stepping motion of the user on the pad, in my judgment it would have been obvious to modulate the view of the virtual environment in a similar way to the way it was shown in Doom. In order to decide what viewpoint to present on screen a decision has to be made about the position and location of the virtual camera. I accept that there are other possible places to put the virtual camera or ways of doing it. Another way might be to hold the camera steady behind the runner. However in my judgment a choice which produces a viewpoint which bobs up and down was one obvious approach and accordingly claim 5 is invalid.

Obviousness – SEGA Heavyweight Champ

192.

Nintendo contended that, in a similar way to the argument over WCTM, it would also have been obvious to upgrade the SEGA Heavyweight Champ game in 1995 by using a full 3D graphics engine. I suspect that is right for the same reasons that I have considered over WCTM but in my judgment it does not assist Nintendo because the SEGA Heavyweight Champ game does not involve data defining a sequence of body motions or following a sequence of body motions as required in the characterising portion of claim 1. The only relevant body motions in the Heavyweight Champ game are the boxing punches. They do not qualify as a sequence of motions.

193.

Accordingly, although it would be obvious to upgrade the boxing game to use a 3D graphics engine in 1995 the result would still not be within claim 1 as granted. I reject the argument that any of the claims of the 484 patent are obvious over the SEGA Heavyweight Champ game either as disclosed in the patent application or as the prior use of the arcade game itself.

Obviousness – Alpine Racer

194.

The differences between claim 1 as granted and Alpine Racer are (i) for skiing, the fact that there is no stored sequence of body motions and (ii) for skating, the fact that the skating motion was not caused by the player’s motion. Nintendo did not agree that either was the case but I have not accepted Nintendo’s submissions. There was no suggestion that it would have been obvious to change the Alpine Racer game so that the skiing motion would fall within the claim (if it did not already) nor was there a suggestion that it would be obvious to change the skating motion to bring it within the claim in this respect (if it was not already). As a result none of the claims can be obvious over Alpine Racer.

The 498 and 650 patents

195.

The 498 and 650 patents relate to a hand held pointing device used to control electrical apparatus. The application for the 498 patent was filed on 28th October 2003 claiming priority from a filing on 20th November 2002. It was granted on 14th December 2011. The 650 was divided out of the application for the 498 patent by a divisional application made in 2009. The 650 patent was granted on 15th May 2013.

The skilled person

196.

Philips submitted, based on Prof Reid’s evidence, that the skilled addressee of the patents would be someone with experience of computer vision, human computer interaction and interactive devices. That experience would either have come from a degree in computer science or engineering covering the relevant areas or from relevant industrial experience.

197.

Nintendo relied on Prof Steed’s view of the relevant set of skills as comprising interactive computer graphics, human computer interaction, electronic engineering and product design. These skills may either reside in one individual or a small team. Prof Steed’s label “interactive computer graphics” relates to the same essential idea as Prof Reid’s “interactive devices”.

198.

The major points emerging from the cross examination were that the human computer interaction area was a key area for the patents and is a broad field. Philips submitted that the problem the patents set out to solve relates to pointing devices which process images of their surroundings in order to determine their location and orientation. This statement of the problem does not mention object recognition and is therefore incomplete at least in that respect but otherwise it is a reasonable statement. Its real significance is to make the point that the patents are not focussed on games. That is correct although games are mentioned in paragraph 53 of the 498 specification. Philips submitted that Nintendo and Prof Steed were too focussed on games.

199.

My findings in relation to the skilled addressee of the patents are as follows. First, in order to put the patents into practice, all the skills referred to by Prof Steed would be needed to a greater or lesser extent. The level of experience of such a person or team would be about 3-5 years. Second, Philips is right that the patent is not focussed on games; it is directed more broadly. Therefore in my judgment the person to whom the patent is addressed would be someone (or a team) with the skills identified by Prof Steed but not with a particular focus on or experience of games. They would have some knowledge of games as an example of the implementation of the ideas in their field of expertise but that is all. The document should be interpreted from the perspective of that person with the common general knowledge of that person.

200.

However as with 484 I do not accept that this characterisation of the skilled person is necessarily applicable to the question of obviousness (see Schlumberger and Inhale cited above). Nintendo’s case was based on the contention that the invention was obvious over the cited prior art to a person with the general skills described by Prof Steed who was working in and had acquired their experience in gaming. I accept that there were real people in the art of that kind and I accept Nintendo’s characterisation of the skilled person for obviousness.

201.

As with 484, in order to distinguish between the two I will refer to the former as the skilled addressee and the latter as the skilled games system developer.

The common general knowledge

202.

The areas of computer vision, image processing and human computer interaction are highly complex and technical. In opening both sides simply referred to sections of the experts’ reports. Those sections are lengthy. In closing both sides simply focussed on what was in dispute. The result is that neither side ever set out a concise summary of what they contended were the matters which formed part of the common general knowledge. It is unrealistic to expect the court to produce its own summary explanation of this complex area of technology and I will not do so.

203.

One aspect of the common general knowledge of the skilled addressee relates to computer vision and image processing. This includes knowledge of digital camera technology. A particular area of common general knowledge is that the skilled addressee understands how to work out the pose of a camera from the pictures it takes. A camera in a space has six degrees of freedom. These are the three spatial dimensions which specify its location and three dimensions which specify the direction in which it is pointing. If the computer can recognise some things in the image as known objects in known locations then it can work out the pose. The known objects can be called scene points. With three scene points located in an appropriate way the computer can work out the various parameters making up all six degrees of freedom. With fewer scene points useful information can still be obtained about the camera’s pose.
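
The kind of calculation described can be illustrated with a minimal software sketch using the OpenCV library's solvePnP function. The scene point coordinates, image coordinates and camera parameters below are invented for the illustration and are not taken from the patents or the evidence; four coplanar scene points are used, which is the minimum the function accepts, rather than the three discussed above.

```python
# Illustrative sketch only: recovering a camera's six-degree-of-freedom pose
# from known "scene points" using OpenCV's solvePnP. All coordinates and
# camera parameters below are invented for the example.
import numpy as np
import cv2

# Known 3D positions of four scene points on a wall (metres, z = 0 plane).
scene_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
], dtype=np.float64)

# Where those points appear in the camera image (pixels).
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [322.0, 150.0],
    [430.0, 148.0],
], dtype=np.float64)

# Idealised pinhole camera: focal length 800 px, principal point at centre.
camera_matrix = np.array([
    [800.0, 0.0, 320.0],
    [0.0, 800.0, 240.0],
    [0.0, 0.0, 1.0],
])
dist_coeffs = np.zeros(5)  # assume no lens distortion

# solvePnP returns a rotation (3 degrees of freedom) and a translation
# (3 degrees of freedom): together they fix the pose of the camera
# relative to the scene points.
ok, rvec, tvec = cv2.solvePnP(scene_points, image_points,
                              camera_matrix, dist_coeffs)
print(ok, rvec.ravel(), tvec.ravel())
```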

204.

One known approach was to use visual beacons of known appearance in a scene. For example in the 1980s a motion tracking system called Vicon determined the motion of a person by viewing objects placed on their body. This also illustrates an aspect of cameras and human computer interaction, namely the use of a fixed camera looking at a person to see how they moved. This was common general knowledge.

205.

As a camera moves or as parts of a scene move the brightness patterns of a scene change. The local image motion can be characterised as an optic flow field and that field describes the motion of an image at the instant the image was acquired. It is possible to compute the optic flow field to determine how a camera has moved between successive images.
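
By way of illustration only, a short sketch of such a computation using OpenCV's dense optic flow function is set out below. The two frames are synthetic, and the reading of the mean flow as an indication of camera motion is a simplifying assumption made for the example.

```python
# Illustrative sketch only: computing a dense optic flow field between two
# successive greyscale frames and summarising it as an average image motion.
import numpy as np
import cv2

# Two synthetic 8-bit greyscale frames; the second is the first shifted
# a few pixels to the right, as if the camera had panned slightly.
frame1 = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
frame2 = np.roll(frame1, shift=4, axis=1)

# Farneback's algorithm returns, for every pixel, a (dx, dy) motion vector.
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)

# The mean flow vector gives a crude estimate of how the image as a whole
# moved between the two frames, and hence how the camera moved.
mean_dx, mean_dy = flow[..., 0].mean(), flow[..., 1].mean()
print(f"average image motion: ({mean_dx:.1f}, {mean_dy:.1f}) pixels")
```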

206.

A point of detail relates to whether motion sensing and position tracking are the same thing. Prof Reid described position tracking as a system that reports the location of an item in a frame of reference. His view was that it did not follow that a system carrying out position tracking was also determining the motion of the item. Conversely a system may use information about movement in order to determine an item’s position and it does not follow that such a system has determined movement for any other purpose. I accept Prof Reid’s evidence about these concepts. Although they are similar, there is a fundamental difference between the idea of working out the position of an item at any given time and working out how an item is moving at a given point in time or how it has moved over a given period. Nintendo submitted that the hardware configuration and the configuration and timing of the sensors are the same in all cases and that the difference depends solely on software. I agree.
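
The point that the distinction lies in software rather than hardware can be illustrated with a toy sketch: the same stream of timestamped position samples (the values are invented) can be used either to report where an item is or to work out how it has moved.

```python
# Illustrative sketch only: the same stream of timestamped position samples
# can support either "position tracking" (report where the item is) or
# "motion sensing" (report how it moved); the difference lies in software.
samples = [  # (time in s, x, y) -- invented values
    (0.00, 0.00, 0.00),
    (0.02, 0.01, 0.03),
    (0.04, 0.03, 0.07),
]

def current_position(samples):
    """Position tracking: simply report the latest location."""
    t, x, y = samples[-1]
    return (x, y)

def motion_between_last_samples(samples):
    """Motion sensing: derive displacement and velocity between samples."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    return {"dx": x1 - x0, "dy": y1 - y0,
            "vx": (x1 - x0) / dt, "vy": (y1 - y0) / dt}

print(current_position(samples))
print(motion_between_last_samples(samples))
```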

207.

Another aspect of the common general knowledge was human computer interaction. The standard way of interacting with a computer at the relevant time was with a keyboard and a computer mouse. At the priority date there were active areas of research into touch screens, voice control and the use of human gestures.

208.

There were known examples of using human gestures for communication with a computer. It was an active area of research. They included the data glove which a user wears. It senses movement of the fingers and hand and transmits this to the computer. I have already mentioned the motion tracking system which used a fixed camera and in which the user wore coloured objects on their body (or part of their body).

209.

A fixed camera looking in at a scene can be called an “outside-in” system as opposed to an “inside-out” system in which the pose of the camera itself is tracked. The distinction is not always so clear cut but suffices for this purpose. Prof Reid’s evidence was that the only camera arrangements used for recognising human gestures in the common general knowledge were outside-in systems. I accept that evidence.

210.

Two particular examples of gesture based interfaces are worth mentioning. First is a famous one from the early days of this area (1980) called the “Put that there” demonstration. In this system the user made a pointing gesture to indicate objects on a computer screen. Second is a particular example of a gesture based computer interface which had been used in consumer products closer to the priority date. It was the Graffiti system for character recognition in the Palm Pilot. The Palm Pilot was a PDA (personal digital assistant), in other words a small hand held computer. They have been replaced by smart phones today. The hand held device had no keyboard, simply a touch sensitive screen. The user holds a stylus and draws on the screen. The shapes drawn are highly stylised and simplified versions of letters. So A is represented by drawing the stylus up at an angle and then down at an angle. The computer sought to interpret these gestures as letters.
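
A toy sketch of this kind of stylised stroke interpretation is set out below. It is not the actual Graffiti algorithm; the direction-labelling scheme and the rule that an "up then down" stroke is the letter A are assumptions made only to illustrate the idea described above.

```python
# Illustrative toy sketch only, not the actual Graffiti algorithm: classify a
# stylus stroke by reducing it to a coarse sequence of movement directions.
import math

def directions(points):
    """Convert a stroke (list of (x, y) points) into coarse direction labels."""
    labels = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
        if 20 < angle < 160:
            labels.append("up")
        elif -160 < angle < -20:
            labels.append("down")
        else:
            labels.append("flat")
    # collapse runs of the same label
    collapsed = [labels[0]]
    for label in labels[1:]:
        if label != collapsed[-1]:
            collapsed.append(label)
    return collapsed

def classify(points):
    """'A' in this toy scheme: a stroke that goes up and then comes down."""
    return "A" if directions(points) == ["up", "down"] else "unrecognised"

# A stylised 'A': up at an angle, then down at an angle (y increases upwards).
stroke = [(0, 0), (1, 2), (2, 4), (3, 2), (4, 0)]
print(classify(stroke))  # -> "A"
```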

211.

A particular kind of known sensor was an inertial sensor. Such a sensor could be a gyroscope or an accelerometer. An example was the InertiaCube incorporated into the IS900 from 1998/9, which included various sensors.

212.

An issue arose about units combining sensors. There was clear evidence that different sensors could be combined if it was thought appropriate. The examples in the evidence were of units which combined an inertial sensor (which necessarily can only sense relative movement) with another which provided a fixed frame of reference. Prof Steed explained that inertial sensors can be combined with a magnetometer or GPS system to get a fixed reference frame and similarly Prof Reid explained that the InertiaCube/IS900 contained an inertial sensor and an ultrasonic tracker. The latter gave a fixed frame of reference.
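
One conventional way of combining a relative inertial sensor with an absolute reference is a complementary filter. The sketch below is illustrative only; the blending factor and the gyroscope and magnetometer readings are invented and nothing in it is taken from the evidence.

```python
# Illustrative sketch only: a simple complementary filter fusing a gyroscope
# (relative rate, prone to drift) with an absolute heading reference such as
# a magnetometer. All readings below are invented for the example.
def fuse(prev_heading, gyro_rate, magnetometer_heading, dt, alpha=0.98):
    """Blend the gyro-integrated heading with the absolute reference.

    alpha close to 1 trusts the fast gyroscope in the short term; the
    (1 - alpha) share of the absolute reference corrects drift over time.
    """
    gyro_estimate = prev_heading + gyro_rate * dt
    return alpha * gyro_estimate + (1 - alpha) * magnetometer_heading

heading = 0.0  # degrees
readings = [  # (gyro rate in deg/s, magnetometer heading in deg)
    (10.0, 0.5), (10.0, 1.2), (9.5, 1.8), (0.0, 2.0),
]
for rate, mag in readings:
    heading = fuse(heading, rate, mag, dt=0.1)
    print(round(heading, 2))
```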

213.

Prof Reid’s view was that the possibility of using inertial sensing means to detect the position of a user’s head was also recognised at the priority date. I accept that.

214.

By 2002 a particular kind of inertial sensor called a MEMS device or MEMS sensor had become available. MEMS stands for MicroElectroMechanical System. They represented a significant advance as a form of accelerometer or tilt sensor because they were cheap, readily available and could be readily integrated into an electronic circuit. The main supplier of these devices was a company called Analog Devices. They were a major development in the market. As a sensor which could be used for motion sensing in an electronic or computer apparatus, MEMS devices were part of the common general knowledge in 2002.
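
By way of illustration of how such a device can act as a tilt sensor, the following sketch converts an accelerometer's reading of gravity into tilt angles; the formula is a standard one and the reading shown is invented.

```python
# Illustrative sketch only: deriving tilt angles from a MEMS accelerometer.
# When the device is held roughly still, the accelerometer measures gravity,
# so the distribution of that 1 g across the axes reveals how it is tilted.
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from accelerations in g."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

# Invented reading: device tilted forward slightly, otherwise level.
print(tilt_angles(0.17, 0.02, 0.98))
```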

215.

In evidence was an article from 1999 by James Doscher which summarised the potential applications of such devices as motion sensors at that time and mentioned games controllers and use in a PDA. I do not accept that the article itself or its contents were common general knowledge but it is consistent with the fact that MEMS devices were common general knowledge by 2002.

216.

So far I have addressed the common general knowledge of the skilled addressee. I turn to address a point relevant to the obviousness case and the common general knowledge of the skilled games system developer. It relates to games.

217.

At the priority date computer games were generally of one of two kinds. They were either played on a personal computer, i.e. a general purpose computer such as a PC, or else they were played on a games console. Games consoles were dedicated machines for playing games. An example was the Sony Playstation. In order to play games on a console in 2002, the player used a hand held games controller which was supplied with the console. For example there was a range of controllers for the Sony Playstation called DualShock. A conventional hand held games controller in 2002 was a unit held in both hands with a number of buttons to press. It might also have one or two thumb operated joysticks (thumbsticks). It might also be equipped with a direction pad or “D pad” which is a flat four way switch. The controller might include lights and a vibration device to create a rumbling effect and provide feedback to the player. The presence on a games controller of a D pad and a thumbstick gave the player a choice about how to use the controller. Both provided the same joystick input to the game but different players might choose to use different devices. They provided a degree of redundancy.

218.

On the other hand, to play a game on a personal computer the player might simply use the conventional keyboard and/or mouse. They may use a joystick or a special games controller which plugged into a personal computer but these were not always necessary.

219.

So far all the information I have mentioned relating to games was common general knowledge to both the skilled addressee and the skilled games system developer.

220.

By 2002 there were a few games controllers available which used MEMS devices. They included a Microsoft product called the Sidewinder Freestyle Pro and a Logitech product called the Wingman Extreme. In these devices the MEMS sensor allowed the player to play the game by tilting the hand held controller. It was another way of creating a joystick input and so gave players a further choice in addition to a thumbstick or D pad. These controllers were developed from existing games controllers. They could be called tiltpads.

221.

Prof Steed’s evidence was that these controllers were common general knowledge in 2002. Prof Reid was not so familiar with the computer games development industry but did not ultimately disagree. I find that these devices were part of the common general knowledge of the skilled games system developer but nevertheless they were niche products rather than representative of mainstream games controllers. They were not part of the common general knowledge of the skilled addressee.

222.

A second area of common general knowledge of the skilled games system developer related to whether gestures were used in games. In this context it is important to be precise about what is under consideration. Nintendo relied on two computer games from 2001: Black & White and Harry Potter and the Philosopher’s Stone. Both were games for personal computers. In both the user could draw shapes with the mouse which had an effect in the game. In the Black & White game a player plays the role of a god in a virtual world and can cast “miracles” by making gestures with the mouse. So for example moving the mouse in a circle causes a particular miracle to occur. The Harry Potter game allowed the user to cast a magic spell by moving the mouse in a particular way to create a particular shape. For example a keyhole shape casts an “Alohamora” spell which unlocks a locked door. The gestures are all two dimensional in the sense that they are made on a flat surface using a mouse. Prof Steed gave clear evidence that these games were well known to games developers in 2002. I accept that these games and the general idea of using this sort of gesture in a game with a mouse was part of the common general knowledge of skilled games system developers but not the skilled addressee.

223.

Prof Steed also referred to examples of games which used joystick type interfaces in which things happened in the game as a result of a particular sequence of actions with the joystick. For example certain “cheats” could be unlocked. They included Motocross Madness 2 and Mortal Kombat. A set of such actions for various Konami games were known as Konami codes. I accept these games (and codes) were part of the common general knowledge of the skilled games system developer (but not the skilled addressee). However I do not accept that the sequence of joystick actions was regarded as the same sort of thing as the gestures made with a mouse in the games mentioned before. In fact the forensic point of focussing on these joystick sequences was to consider what would happen if a game such as Motocross Madness 2 was played with a games controller which incorporated a MEMS sensor. In effect one then had a player making a gesture to control the computer. I accept that that is what happens when the two are combined but I do not accept that this combination was common general knowledge of anyone.
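
The kind of sequence detection involved can be illustrated with a toy sketch. The sequence shown is the well-known Konami code; the detection logic is a simple assumption made for illustration and is not taken from any particular game.

```python
# Illustrative sketch only: recognising a fixed sequence of controller inputs.
# The target sequence is the well-known Konami code; the detection logic is a
# simplifying assumption, not any particular game's implementation.
KONAMI = ["up", "up", "down", "down", "left", "right",
          "left", "right", "B", "A"]

def make_detector(target):
    progress = 0
    def feed(button):
        nonlocal progress
        if button == target[progress]:
            progress += 1
        elif button == target[0]:
            progress = 1
        else:
            progress = 0
        if progress == len(target):
            progress = 0
            return True   # full sequence entered: unlock the cheat
        return False
    return feed

detect = make_detector(KONAMI)
inputs = ["up", "up", "down", "down", "left", "right",
          "left", "right", "B", "A"]
print(any(detect(b) for b in inputs))  # -> True
```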

224.

Another item of software relied on by Nintendo was a web browser called Opera. Although it was used by a minority of users, it was clearly used on a substantial scale and I accept both a skilled games system developer and the skilled addressee would be aware of it as part of their common general knowledge. It included a function whereby strokes of the mouse would be interpreted by the software as commands. So for example a downward motion with the mouse, while holding the right mouse button, would cause the browser to open a new tab. I accept that this was present in the Opera browser which was publicly available at the priority date but I do not accept the existence of this function was common general knowledge.

225.

Finally on this topic Nintendo relied on Microsoft Encarta, which was a form of encyclopaedia program from 1998. For Microsoft Encarta, I was not satisfied on the evidence that it in fact did operate in the relevant manner. A globe is depicted in the program but on the evidence Encarta may simply have rotated the globe as the mouse moved sideways. That is not the same kind of activity as in the other examples relied on.

The 498/650 patent specification

226.

I will consider the specification of the 498 patent. There is no need to address the specification of the 650 patent separately. The point of view is that of the skilled addressee.

227.

The essential idea is a system in which the person holds a device in their hand and uses it to interact with and control things. The thing to be controlled may be any electrical apparatus. An illustrative example of the idea is shown in Fig. 1:

228.

The user holds a “pointing device” to point to things in a room and issue commands. An example of a pointing device is shown in figure 3:

229.

In this example the device can receive information via a camera (item 302), a gyroscope (item 304) and buttons (308) and can give feedback to the user via a display (316), a light (312), a speaker (314) or by “force feedback means” (306) which vibrate. The hand held device sends electronic signals to a digital signal processor which is shown in Fig. 1 as item 120.

230.

The general idea is that the user will be able to point to something in the room such as the Hi-Fi sound system (item 130) and the system will know that this is the thing to which the user is pointing. The user will be able to control the item. The commands may be given by pressing buttons on the device but the patent also describes commands given by waving the device in the air so as to make a gesture. For example one gesture may indicate that the user wants the Hi-Fi to be switched on or another may indicate that the volume should be turned up or down. The ability to make commands by means of gestures reduces the need for a large number of buttons on the device.

231.

In its most general form the system described is able to work out what apparatus it is that the user wishes to control and also work out what command the user wishes to send. The signals from the camera can be used for both purposes. They are used to identify the object to be controlled and used to determine the motion trajectory in order to decode the user’s gestures. Although ordinarily one might expect the user to point to the thing they wish to command, the patent also contemplates the user pointing at one thing (e.g. a physical calendar) in order to indicate they wish to control something else (an electronic calendar on their computer).

232.

The specification also contemplates that the pointing device may carry another sensor which can sense motion. Examples are given including an accelerometer (referred to as a mass on a deformation sensor), a gyroscope or differential GPS. This other sensor is used for determining the motion trajectory for commands and is not used for working out what thing is being pointed to.

233.

Paragraph 32 of the patent states that:

“Irrespective of whether the device is used for recognising objects, it can be used to send apparatus control data corresponding to specific movements by the user. The intended apparatus in such an application of the pointing device could e.g. be fixed or indicated with a button.”

234.

This is important because it distinguishes between the function of recognising an object to be controlled and the function of issuing gesture based control signals. Quite a lot of the disclosure in the patent is concerned with object identification but in this passage the patent teaches the idea of a system with a hand held device which performs only the latter function on its own and does not carry out object recognition.

235.

The sensor(s) in the device allow the trajectory of the device to be tracked. This information is then used to work out what gesture has been made and to interpret the gesture as a command. Figs. 4a and 4b illustrate part of the process of working out what gesture the user has made with the device:

236.

Fig. 4a represents a case in which the pointing device has been moved upwards. The trajectory of the device itself (and therefore the user’s hand) is determined by the system obtaining signals from the hand held device. The trajectory is marked in Fig 4a as 400. The system has then interpreted the trajectory and recognised it. The interpretation of the trajectory is referred to in the document as a signature. The vector marked 402 is a signature. Similarly in Fig. 4b the user has made a broad sweeping circular motion in the anticlockwise sense (trajectory 410) which the system has detected and interpreted as a smooth motion circular arrow (412).
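
The difference between the two trajectories in figure 4 can be illustrated with a toy sketch which derives a crude "signature" from a sampled path by using the whole path rather than only its end points; the threshold and the sample trajectories are invented for the example.

```python
# Illustrative sketch only: interpreting a sampled motion trajectory as a
# simple "signature" (straight swipe vs circular sweep) by using the whole
# path rather than just its end points. Thresholds and data are invented.
import math

def classify_trajectory(points):
    (x0, y0), (xn, yn) = points[0], points[-1]
    end_to_end = math.hypot(xn - x0, yn - y0)
    path_length = sum(math.hypot(bx - ax, by - ay)
                      for (ax, ay), (bx, by) in zip(points, points[1:]))
    # A straight swipe travels little further than the end-to-end distance;
    # a circular sweep travels much further (and may end near where it began).
    if path_length < 1.3 * max(end_to_end, 1e-9):
        return "line"
    return "circle-like"

upward_swipe = [(0, i) for i in range(10)]
circle = [(math.cos(a), math.sin(a))
          for a in [i * 2 * math.pi / 20 for i in range(21)]]
print(classify_trajectory(upward_swipe))  # -> "line"
print(classify_trajectory(circle))        # -> "circle-like"
```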

237.

The use of room localisation beacons is described in paragraph 50. These are shown as items 180, 181 and 182 in figure 1. They are used by the signal processor to analyse pictures taken by the camera in order to be able to work out what it is the camera is pointing at and therefore what the pointing device is pointing at in the room. The camera can take images of the beacons and that allows the location of the pointing device and its orientation to be determined.

Claim construction for 498 and 650

238.

A number of points on claim construction arise. Some are best dealt with in the context in which they arise but the main ones can be addressed now. They are:

i)

“suitable for”

ii)

recognising where the device is pointing

iii)

the brackets

iv)

motion sensing means

v)

motion trajectory and analysing gestures

239.

Owing to the proliferation of claims and amendments some of these points relate to multiple claims. I will structure my consideration of construction by reference to claims rather than construction topics.

Claim 1 of 498 as granted

240.

Claim 1 of 498 as granted relates to a system comprising an electrical apparatus and a portable pointing device. The claim requires a camera which is connected to the pointing device so that in operation it images the region pointed to. There must also be a digital signal processor capable of receiving and processing the picture taken by the camera and sending user interface information derived from the picture to the electrical apparatus. The claim is not specific about the location of the digital signal processor. The processing could occur anywhere and parts of it could occur in different places. Part or all of the signal processing can take place in the pointing device or the electrical apparatus.

241.

The characterising portion of the claim includes a requirement for at least one room localisation beacon. The beacon has to emit electromagnetic radiation in order to allow the signal processor to work out to which part of the room the pointing device is pointing. The expression electromagnetic radiation is clearly wide enough to cover both visible light and infrared radiation.

suitable for

242.

The same “suitable for” construction issue which arises in the 484 case also arises in this case. I have addressed it above and the conclusions apply to the claims of the 498 and 650 patents as they do to the 484 patent.

recognising where the device is pointing

243.

Nintendo submitted that the language about beacons and recognising where the device is pointing in the room would not be satisfied by a system which merely knew it was pointing near a beacon. It argued that the arrangement of three beacons in fig 1 would be recognised by the skilled addressee as no coincidence. It is an asymmetric arrangement used to resolve ambiguities in position and orientation of the device. This asymmetric arrangement of three beacons allows the system to derive all six degrees of freedom and unambiguously resolve the pointing direction. Nintendo pointed out that Prof Reid had accepted that without this asymmetry the system could not have determined the difference between the device being held on one side pointing left as opposed to being held on its other side pointing right.

244.

I accept as a fact that an arrangement of two beacons rather than three will create the possibility of a left/right ambiguity of the kind described although in practice the ambiguity may be resolvable or avoidable in other ways or may simply be tolerated. I do not accept this has anything to do with the construction of claim 1. There is nothing in the language of the claim which would lead a reader to think any particular standard of accuracy was required about the ability to recognise to which part of the room the device is pointing. The fact there may be ambiguities in some cases is irrelevant.

245.

Moreover there is no justification for reading into the claim a requirement for an absolute frame of reference. A system which can determine to which part of the room the device is pointing relative to the beacons will satisfy the claim.

The brackets

246.

A point arose in relation to the manner in which claim 1 had been printed in the B1 specification of the 498 patent. As printed the claim reads as follows: “(180, 181, 182 the system being), in a room …” but it is obvious that the closing round bracket is in the wrong place. It is also obvious from the patent as a whole that what should have been written is “(180, 181, 182), the system being in a room …”. In other words not only was the round bracket in the wrong place but so was the comma. Apart from anything else, these mistakes are clear from looking at the French and German text of the claims. Whether the druckexemplar contained the same typographical error or whether this was a printer’s error, I do not know. The problem with the bracket is minor but the comma could conceivably have had an influence on construction (despite avoiding meticulous verbal analysis). In any case the correction of these mistakes was not objected to by Nintendo. They are the subject of the fourth conditional amendment.

Claim 2 of 498 as granted

247.

That claim calls for a system in claim 1 which further comprises motion sensing means. The motion sensing means is for sensing a motion and/or for calculating a motion trajectory.

motion sensing means

248.

A key dispute between the parties is whether the camera could be a motion sensing means within claim 2. Nintendo submitted it could. It argued that the specification clearly describes using the camera to sense motion and that there was no reason to limit the general term “motion sensing means” to exclude a camera. Nintendo also submitted that the expression “other motion sensing means” in paragraph 52 after a reference to the camera showed that the document itself acknowledged that a camera was a kind of motion sensing means. Philips submitted that the skilled addressee would understand that the combination of claim 1 plus claim 2 was referring to a system with a pointing device which carried a camera and some other sensor for sensing motion which was not a camera, such as a gyroscope or differential GPS system. It submitted that such an arrangement was disclosed in the specification and the document as a whole would be understood in this way. Philips also submitted that looking at claims 1, 2 and 3 together assisted its case. Claim 3, which refers to a system in which pictures from the camera are used to estimate motion, is not dependent on claim 2. That, argues Philips, is because claim 2 would not be understood to cover the case in which the camera is used for estimating motion.

249.

I acknowledge there is some force in Nintendo’s submissions but I prefer those of Philips. The document clearly describes a pointing device which has a camera and also another motion sensor. That other motion sensor is clearly different from the camera. Although grammatically the word “other” in paragraph 53 supports Nintendo, I think a person reading the document as a whole would understand the patentee to be using the term “motion sensing means” to refer to another kind of sensor. The sensor has to be able to sense motion but it is not a camera. Grammatical points to put against Nintendo’s grammatical point are in paragraph 20 line 16-22 and paragraph 42 line 30-32. In those paragraphs a distinction between the camera and the motion sensing means is drawn.

250.

I find that claim 2 does not cover the case in which the pointing device can only sense motion by using the camera. To be within claim 2 (given its dependence on claim 1) there must be a camera on the pointing device and there must also be some other sensor which can sense motion.

251.

A second dispute was whether a sensor which could only detect that movement had occurred was a motion sensing means or was for sensing a motion. The latter phrase is in claim 2 as granted. It is convenient to address this question below, after I have dealt with motion trajectory.

Claim 3 of 498 as granted

252.

Claim 3 requires the motion or motion trajectory of the pointing device to be estimated on the basis of successive pictures imaged by the camera. I will address estimating a motion trajectory below.

Claim 5 of 498 as granted

253.

This claim is dependent on claim 2 and requires that the transmitted user interface information has to include certain details. The language is rather laborious and comes down to a requirement that the details have to be one or both of the motion trajectory itself and/or the characteristic signature. As I have explained above, the latter is derived from the former.

254.

The transmitted user interface information is the information sent from the signal processor to the electrical apparatus to be controlled. A curiosity arises since in claim 1 the transmitted user interface information is derived from the picture taken by the camera but it is clear from the document as a whole that motion trajectory information may be derived from the camera or from the other motion sensor. Is claim 5 limited to information from the camera? The wording appears to be so limited, save that claim 5 is dependent on claim 2, which on my construction is limited to non-camera motion sensing means. However this quirk does not cause me to reconsider the interpretation of “motion sensing means” in claim 2 since the anomaly would always be present either way. Claim 2 certainly covers a motion sensor which is not a camera and on that basis the quirk would still arise. I find that claim 5 is not limited to information derived from the camera. It covers both.

Claims 1A, 2A and 3A of 498

255.

In claims 1A, 2A and 3A of the 498 patent, the requirement for a motion sensing means in granted claim 2 is moved into claim 1A. Thus claim 1A would now be limited to a system in which there is a camera and a distinct motion sensing means. Further, more or less consequential amendments to claims 2 and 3 are made to take this change to claim 1 into account.

motion trajectory and analysing gestures

256.

A second aspect of the proposed amendment to create claim 1A calls for the digital signal processor to analyse gestures. The gestures are ones made with the pointing device and the analysis is based on a motion trajectory of the pointing device. Philips submitted that a system which simply identified two points in space at which the device had been pointing was not a system within the amended claim. The point can be explained by considering figures 4a and 4b of the patent. Both gestures in figure 4 have distinct start and end points. If a system simply looked at the start and the end points, ignored the path in between and in effect drew a straight line between the two points, then it would be unable to tell the difference between the two kinds of gestures in Fig 4. A circle gesture would be the same kind of gesture as a line gesture. Philips contended that such a system was not within the claim. It supported its argument by emphasising that the claim refers to the analysis being based on a motion trajectory. The motion trajectory is the path actually taken as the pointing device moves over an appropriate period of time.

257.

Nintendo did not agree. It submitted that the motion trajectory on which the gestures must be based does not require anything more than two points. Furthermore Nintendo argued that to “analyse gestures” requires no more than an ability to identify a particular motion of the pointing device. The motion to be identified may be complex, such as a circle but there is no reason why the system should not simply have to be able to identify a simple motion such as a straight line between two points.

258.

I can start with two points. First it was common ground that there are no terms of art involved in the issue. Second although a distinction was drawn in argument between semantic gestures (with a meaning) and mimetic gestures (which aim to mime a movement such as a tennis swing), in my judgment the claim covers both. There is nothing in the claim which provides a basis for such a distinction. Moreover I am not convinced that there is a sufficiently tangible difference to draw such a distinction anyway. An example which arises relating to the Wii makes the point. When a user plays Tennis with the hand held device they purport to mime tennis swings. The swings are not high quality tennis shots and are not intended to be but they are nevertheless mimetic in character. Nevertheless Philips submitted that because the Wii interprets the actual gesture, selects the best shot to which the gesture approximates, and animates the avatar to play that best shot, it follows that the gestures analysed by the Wii are not mimetic but semantic. I do not agree. All the example shows is that there is not a clear difference between the two. The gesture shown in Fig 4b of the patent is both mimetic and semantic in the same sort of way since it mimes the turning of a round control knob to indicate a command.

259.

I do not accept Philips’ submission that the system has to be able to distinguish between a circle gesture and a line gesture. It will always be possible to conceive of two gestures which might be difficult to distinguish from one another and there is no reason why the claim should be limited to a system which can draw that particular distinction. The claim does not specify a level of ambiguity which the system must be able to resolve. Just because the example embodiments show two different features in figure 4 does not justify reading that limitation into the claim.

260.

However I am not convinced that Nintendo’s submission is right either. To reduce the analysis to one based only on two points in space seems to me to give no meaning to the requirement that the analysis is based on a motion trajectory. The idea of a motion trajectory conveys the idea of the path actually taken over an appropriate period of time rather than simply the locations of the start and the end of any given path. The specification at paragraph 48 ln 36-41 is consistent with this since it refers to estimating a part of a motion trajectory based on two successive pictures taken by the camera. In other words the two successive pictures (or points) allow one to estimate a part of the trajectory rather than the whole trajectory.

261.

I find that to be within claim 1A the system has to base its gesture analysis on a motion trajectory of the pointing device. That is not satisfied by merely basing the analysis on the start and end points of a given motion of the pointing device. It does not have to go as far as distinguishing a circle from a line but the analysis has to take some account of the path taken between the beginning and the end of whatever is deemed to be an appropriate period of time.

262.

I turn to consider claim 2. Philips’ case was that “sensing a motion” in granted claim 2 did not refer to something which was simply able to detect that an object had moved but referred to sensing the overall manner in which the pointing device had been moved during an appropriate period of time, such as whether it had moved in a circle or a straight line. Philips argued that sensing a motion means determining that the motion is e.g. a circle or a line.

263.

I accept the thrust of Philips’ case on this. Out of context the words could cover simply detecting that movement has happened but the description does not mention an example which merely detects that the device has moved. In the patent the purpose of sensing a motion with the motion sensing means is to provide an input into the signal processor in order to go on to estimate a motion trajectory and derive a signature. Read in that context it seems to me that the skilled addressee would understand the language to be getting at the idea that the motion has to be characterised in some way. I am not convinced that “sensing a motion” has to be able to distinguish between a line and a circle as Philips appear to argue but it must involve some characterisation of the nature of the motion. Simply detecting that movement has happened without any form of characterisation is not enough.

264.

In relation to this point the terms “sensing a motion” and “motion sensing means” must bear meanings which correspond to one another. For a sensor to be a motion sensing means it must be able to sense a motion. Thus a sensor which can only detect that a movement has occurred is not a relevant motion sensing means.

265.

The amendment to create claim 2A deletes the reference to “sensing a motion” and changes the reference to “calculating” a motion trajectory into “estimating” a motion trajectory.

266.

The amendment to create claim 3A removes the reference to “the motion” as an alternative to “the motion trajectory” of the pointing device but nothing arises from this.

Claims 1B, 1C and 1D of 498

267.

The second conditional amendment to create claim 1B is advanced in the alternative to the first conditional amendment (claim 1A). The third conditional amendment (claim 1C) combines the first and the second. As compared to claim 1A, creating claim 1B also involves changing the reference to “at least one room localisation beacon” to become “room localisation beacons”. Thus it excludes the case in which one localisation beacon is referred to.

The claims of the 650 patent

268.

Claim 1 of the 650 patent uses familiar terms but is not identical to any claim of 498 either as granted or as proposed to be amended. The system must have a camera. There is a requirement that means for estimating motion or a motion trajectory are included in the system. This would clearly be satisfied by a camera with appropriate processing or by another “motion sensing means” as I have construed the term but claim 1 of 650 is not limited to the case in which there is a camera and different motion sensing means.

269.

The system must have at least one room localisation beacon like claim 1 of 498. There is an application to amend claim 1 of 650 to change the reference to “at least one room localisation beacon” to become “room localisation beacons” and make a corresponding grammatical change.

270.

Claim 2 of 650 is similar to but different from claim 2 of 498. Whereas claim 2 of 498 referred to “sensing a motion and/or calculating a motion trajectory” claim 2 of 650 is limited to “estimating a motion or a motion trajectory” and so the estimation process applies in either case.

271.

Compared to claim 3 of 498, claim 3 of the 650 patent has further wording after the word camera but this adds nothing to the meaning of the claim as compared to claim 3 of 498.

272.

Claim 6 of 650 involves language very close to the gesture language to be inserted into claim 1 of 498 as the first proposed amendment. This language bears the same meaning that I have considered above in relation to the 498 patent.

Added matter

273.

Nintendo submitted that the words “at least one room localisation beacon” in claim 1 of 498 and 650 were added matter on two grounds. The first was that the application simply did not disclose the idea of using a single room localisation beacon at all. The second was an argument based on intermediate generalisation. Philips did not accept either argument but proposed amendments to deal with the first point.

274.

I will consider the first point first. The application never discloses the idea of using a single room localisation beacon. The text is always plural and the only figure depicts three beacons. Moreover having a plurality is not a co-incidence as the skilled addressee would understand what the function of room localisation beacons was and that one might well want to use multiple beacons to reduce ambiguities about the position and orientation of the pointing device. Accordingly I reject any suggestion that there could be said to be an implicit disclosure of the idea of using a single beacon. It is neither expressly taught nor necessarily implicit.

275.

The issue is whether the patent as granted discloses the idea of a single beacon. Philips submitted that although the claim obviously covered the case of a single beacon, it did not disclose such a thing and relied on the case law running from AC Edwards v Acme to AP Racing v Alcon. I think the question I have to decide neatly illustrates the difference and the potential difficulty which can arise in this area. In my judgment claim 1 as granted does not only cover a system with a single room localisation beacon, it also discloses that idea. The skilled addressee reading the granted patent would have the idea that one of the things they could build if they put the ideas in the document into practice was a system with a single beacon in it. That idea is conveyed by the claim language. The reader would know that the primary purpose of claims is to delineate the monopoly and define the invention but they are also part of the disclosure of the document. I find that claim 1 as granted (of both patents) introduces new matter into the disclosure. It is contrary to s76 of the Act (Art 123(2) of the EPC) and is invalid.

276.

It is convenient at this stage to deal with the amendment to the granted claims which Philips advances as a fall back to deal with this problem if the court finds added matter. The amendment replaces “at least one room localisation beacon” with “room localisation beacons”. I will assume appropriate amendments are made to the consistory clause in paragraph 14 of 498. The phrase “room localisation beacons” is soundly based in the specification of the application as filed (p13 ln2). Nintendo submitted that the disclosure of the application as filed was limited to three beacons and that this change caused added matter. I reject this for two reasons. First in my judgment the disclosure of the application is not limited to the idea of three beacons. The text would be understood as referring to three as an example. The fact that three are shown in figure 1 and three reference numerals appear in the description does not mean the reader would think three was mandatory. Nor does the fact that the reader would understand why three beacons were used and were shown asymmetrically make any difference.

277.

For my second reason I will assume in Nintendo’s favour that the only idea disclosed in the application is of using three beacons. I will not assume in Nintendo’s favour that the disclosure is limited to the asymmetric pattern in fig 1 since in my judgment it plainly is not. In this case the proposed amendment satisfies the principle in the cases from AC Edwards v Acme to AP Racing v Alcon. The patent with that amendment in claim 1 will cover the case in which beacons (plural) are used (2, 3, 4 or any larger number). However when read in the context of the patent specification as a whole it does not disclose the idea of using a number other than three. Any reader who understood that the only idea disclosed in the application was of using three beacons due to the figure and the reference numerals in the body of the specification would not think that any other number of beacons was disclosed in the patent since it still has the same figure and reference numerals in the body of the specification and the claim in the amended form does not disclose anything else.

278.

Nintendo’s second point based on intermediate generalisation is as follows. In the application the only reference to room localisation beacons is in the paragraph between p12 ln27 and p13 ln12. This passage corresponds to paragraph 50 in the granted patent. Here the room localisation beacons are referred to in a sentence which explains that the beacons are present so that the signal processor can recognise to which part of the room the pointing device is pointing. Although this language could be read broadly, Nintendo contends it is a disclosure in a very specific and limited context. The beacons are disclosed only as part of the specific embodiment and as part of a method for providing information to a particular entity in that specific embodiment. The entity is the identification improvement unit or identification improvement means (IIM). It is item 210 in the block schematic diagram of the digital signal processor set out in figure 2:

279.

The function of the IIM is to improve identification of objects or commands. It operates logically downstream of the OIM (object identification means) and the SIM (signature identification means) and comes into play because the identification of the object by the OIM might be incorrect or similarly the identification of the command by the SIM might be incorrect. Various methods of improving identification are referred to in the paragraph. Nintendo argues that the single sentence at p12 ln30-31 is concerned with improving the identification of commands and then from there onwards the paragraph is concerned with improving object identification until the last sentence in the paragraph brings it all together with a reference to possible use of Bayesian probabilities or fuzzy logic to arrive at a more certain identification of the object or the command. This is at p13 ln9. The reference to room localisation beacons is within the paragraph in the part which Nintendo submits is concerned with object identification.

280.

Nintendo submits that the granted patent involves added matter because the granted patents present and claim the beacons entirely stripped of that original context and purpose. The claims and the disclosure omit any reference to identification improvement or to object identification at all. As part of this added matter case Nintendo draws attention to other amendments which were made between the application and the granted patents. The application discloses three objects for the invention. They are at p2 ln8-15, whereas the 498 patent has a new first object at paragraph 10 which is different in scope. It is to provide a system which improves the recognition of where the portable pointing device is pointing. (Paragraph 10 also refers back to paragraph 1 which itself has been amended as compared to the application.) The inference to draw, submits Nintendo, is that this new first object itself plays a role in the process of adding matter by subtle changes in context. The new object is clearly taken from the words used to describe the room localisation beacons in the relevant passage in the disclosure. By taking the idea and presenting it this way, Nintendo submits the context has been changed and the more significant change in context has been disguised and made less apparent.

281.

Philips denies all of this and submits there is no added matter at all. First it relies on the AC Edwards v Acme line of cases and denies that the patent discloses new ideas as alleged by Nintendo, such as the allegedly new idea(s) that beacons can be used to recognise which part of the room is being pointed to in a system which does not comprise an object identification means or an identification improvement means. Second it argues that the application clearly discloses the idea of sensing a motion trajectory which represents a command even when the device is not used for recognising objects. That is the passage at p6 ln31 – p7 ln1 corresponding to paragraph 32 as granted. Furthermore the disclosure at p12-13 of the application is not limited to using the beacons for object identification but also includes command identification. Third, Philips argues that the express disclosure of the application was that the room localisation beacons allow the signal processor to recognise the part of the room to which the pointing device is pointing. The granted patent discloses no more than that. Fourth, to apply the test for added matter, the disclosure has to be considered as it would be understood by a skilled addressee in the light of their common general knowledge. It would be clear to the skilled addressee that the use of the beacons to help determine which part of the room was pointed to was a generally applicable technique and not limited to use for identification improvement.

282.

I will start with the disclosure of the application. The clear teaching of the application is that the beacons are there so that the pictures of them taken by the camera can be used by the digital signal processor in order to recognise to which part of the room the pointing device is pointing. That is the purpose of the beacons. The teaching about beacons is in a paragraph concerned with identification improvement. The paragraph is concerned with improving recognition of commands and/or objects and I do not accept Nintendo’s argument that the beacon disclosure is limited only to objects. A number of distinct ideas are disclosed in that paragraph. There is no reason to limit the teaching about beacons to objects as opposed to command identification. Nintendo’s submission involves sieving the language too finely. Nevertheless it is true that the paragraph is all about identification improvement.

283.

Philips is also right that the application expressly discloses a system in which there is no object recognition at all and a user’s gestures made with the pointing device can be identified as commands for a fixed apparatus. That is what is taught in the application at p6 ln31 – p7 ln1 corresponding to paragraph 32 as granted. Moreover the application also includes a clear teaching that motion trajectories can be sensed using the camera. Mr Speck called this a secondary idea. I do not accept that characterisation. It is a clear and express part of the disclosure.

284.

Turning to the granted patent, claim 1 (of both patents) states in terms that room localisation beacons are there “for use by the digital signal processor in order to recognise to which part of the room the pointing device is pointing”. That is the same purpose for the beacons as is disclosed in the application and the new first object clause in the granted patent discloses no more than this.

285.

However Nintendo submits that the removal of the beacons from their original context has led to the disclosure of new matter. I do not agree. First I accept Philips’ AC Edwards v Acme point. I accept the claims cover the use of beacons in contexts other than identification improvement but they do not disclose that idea to the skilled addressee. They also cover using beacons for gesture analysis as well as object identification but they do not disclose that idea either.

286.

Second, even if, which I do not accept, the granted patent implicitly discloses using the beacons for something other than identification improvement, that is not different from the application. Read as a whole the application necessarily and implicitly contains the same disclosure. First the point of the beacons is to help work out where the device is pointing. That information can be used for recognising things and not only for improving recognition. Second the beacons can be used for recognising commands and not only for objects. Third the system need not be used to recognise objects at all and can be solely concerned with gesture based commands of a fixed device. All of these statements apply to the application as much as the patent. There is no difference in the implicit disclosure.

287.

Finally I will say that I do not accept Philips’ point based on common general knowledge. It is right that in principle the question always has to be decided by considering the disclosure to a skilled person in the light of the common general knowledge but the common general knowledge cannot be used to add to the disclosure. Patentees sometimes fail to disclose something which would be obvious to a skilled addressee reading the patent application. But obviousness is not the test for added matter. If the disclosure was not clearly and unambiguously present, expressly or by implication, then it cannot be added. This fundamental but subtle aspect of the test is also a reason why the evidence of experts is rarely of any use in deciding such questions. I did not find the evidence of either expert of assistance in relation to added matter.

288.

A final argument from Nintendo was that there was no disclosure in the application of using, for gesture analysis, the non-camera based motion sensing means. In other words on the construction of motion sensing means advanced by Philips and which I have accepted, there was added matter. As far as I know this point was not pleaded but it is a bad point in any event. Nintendo supports its argument by submitting that the only reference to gesture analysis in the application is at p11 ln19-22 and that this is about using the camera. This is wrong. The passage refers to the camera as an example. Figure 2 clearly discloses using signals from the motion sensing means which is other than the camera, to estimate motion trajectory and thereby analyse gestures. Signals from the camera can also be used but the teaching is plainly not limited to that.

Impact on the claims of 498 and 650

289.

The result of my findings so far is that in 498 the granted claim 1 is, and amended claim 1A would be, invalid for added matter, but claims 1B, 1C and 1D avoid the added matter problem. In 650 granted claim 1 is invalid but the amendment cures the added matter problem.

The amendments and double patenting

290.

I have dealt above with the various language objections to the proposed amendments to the claims of the 498 and 650 patents and identified which sets of claims are formally allowable. That leaves the issues of novelty and inventive step but also a further objection to the amendments taken by Nintendo. It submitted that even if the amendments to the 498 patent were formally allowable, they should not be permitted by this court in the exercise of its discretion, because they would lead to double patenting. If the matter was before the EPO they would be refused by the EPO for double patenting. The submission that they would be refused in the EPO is important because the court’s discretion to allow amendments under s75 of the 1977 Act is now limited. Section 75(5) requires the court to have regard to the relevant principles applied in the EPO.

291.

I start with identifying what double patenting is. Section 18(5) of the 1977 Act provides that:

“Where two or more applications for a patent for the same invention having the same priority date are filed by the same applicant or his successor in title, the comptroller may on that ground refuse to grant a patent in pursuance of more than one of the applications.”

292.

Thus the Comptroller is able to stop an applicant with two effectively identical patent applications from getting two identical patents. At first sight the logic of this is simple enough. It is hard to see why an applicant might want two such patents anyway but one can see that if an applicant did file two truly identical applications then it could lead to trouble and confusion for third parties. The Comptroller is therefore authorised to prevent it and refuse to grant more than one of them.

293.

Where the double patenting question becomes more significant is when it is applied to cases in which the two patents are not identical to each other in form but are found as a matter of substance to be for the same invention. An example of the application of s18(5) by a Hearing Officer for the Comptroller is IBM’s (Barclay and Biggar’s) Application [1983] RPC 283.

294.

Section 18 is in Part 1 of the 1977 Act and applies to national (i.e. UK) applications being dealt with by the Comptroller. It does not apply to applications pending before the EPO. However Art 139(3) EPC permits but does not require contracting states to enact a similar double patenting prohibition dealing with parallel national and EP patents. The United Kingdom has implemented such a provision in s73(2) of the Act. It deals with double patenting when an EP (UK) patent and a national UK Patent have been granted for the same invention with the same priority date with the same applicant. In that case the Comptroller will revoke the national UK patent but before doing so the patentee is given the chance to make amendments to the claims to remove the problem.

295.

Section 73(2) was considered by the Court of Appeal in Marley’s Roof Tile [1994] RPC 231. In that case Aldous J at first instance had decided that while the words “same invention” did not require identicality they did require practical similarity. He held that the purpose of the section was not to prevent overlapping monopolies since that was dealt with by s2(3) of the 1977 Act and so the fact that a product might infringe claims of both patents did not mean the section applied. On the facts Aldous J found for the patentee. On the Comptroller’s appeal to the Court of Appeal the court (Balcombe, Butler-Sloss and Mann LJJ) overturned Aldous J’s decision. They noted that the argument that the purpose of s2(3) was to prevent double patenting had been rejected by the House of Lords in Asahi [1991] RPC 485. They held that the judge’s construction that the section was not to prevent overlapping monopolies would make it easy to evade and that the correct construction was what the Court identified as the literal one. If the claims of the two patents cover the same invention then the section is engaged regardless of whether other linked inventions are also covered by the claims of either patent. Thus the Court of Appeal’s judgment held that this double patenting provision in s73(2) applied to overlapping claims and not only to claims which were of the same (or practically the same) scope.

296.

In the EPO the question of double patenting arises in the context of divisional applications. The EPO examiner may raise a double patenting objection to the claims of a later divisional application. The objection may be taken as a ground for refusing proposed claim amendments. The objection can be overcome by appropriate amendments. Many issues relating to divisional applications were reviewed by the Enlarged Board of Appeal in a decision on 28 June 2007 based on two references G1/05 Divisional/ASTROPOWER and G1/06 Sequences of Divisionals/SEIKO. At paragraph 13.4 the EBA said:

“13.4

The Board accepts that the principle of prohibition of double patenting exists on the basis that an applicant has no legitimate interest in proceedings leading to the grant of a second patent for the same subject-matter if he already possesses one granted patent therefor. Therefore, the Enlarged Board finds nothing objectionable in the established practice of the EPO that amendments to a divisional application are objected to and refused when the amended divisional application claims the same subject-matter as a pending parent application or a granted parent patent. However, this principle could not be relied on to prevent the filing of identical applications as this would run counter to the prevailing principle that conformity of applications with the EPC is to be assessed on the final version put forward (see point 3.2 above).”

(My emphasis. The last sentence of the quoted paragraph relates to a different point – whether the objection could be taken to prevent even the filing of an application from the outset. The EBA held it could not.)

297.

Historically the general approach of the EPO was that the objection was taken to ensure that the subject matter of the divisional differed from the parent. The objection had not been taken if all that had happened was that a claim in a parent patent and a claim in a divisional overlapped in their coverage. However Nintendo cited the decision of Board of Appeal 3.3.07 in T307/07 (ARCO/Double Patenting) on 3rd July 2007. In that case the Board deduced from Art 60 that the EPC prohibits double patenting. The first sentence of Art 60 provides that the “right to a European patent shall belong to the inventor or his successor in title” (my emphasis).

298.

In T307/07 the Board also held that a double patenting objection can be raised when the subject matter of the granted parent claim is encompassed by the subject matter of the divisional claim. In other words when the divisional claim is broader than the parent claim and entirely encompasses it. The Board regarded this as a case in which an applicant was seeking to re-patent the subject matter of an already granted patent claim and in addition obtain patent protection for other subject matter not claimed before. An amendment to the divisional claim would be needed to exclude the parent and limit the divisional claim to subject matter not already covered.

299.

However on 7th November 2008 in T 1391/07 Board of Appeal 3.4.02 held that the fact that a claim in a divisional might encompass the subject matter already claimed in a granted parent did not engage a double patenting objection. In that case the divisional claim encompassed embodiments covered by the parent claim but also covered other things as well. In paragraphs 2.5 and 2.6 this Board held that double patenting related only to a situation in which the same subject matter was being claimed twice, in other words to claims with the same scope of protection. There was no reason to deny the existence of a legitimate interest (as identified in G1/05) in securing a divisional claim with a scope of protection different from, although partially overlapping with, the scope of the parent case.

300.

Philips cited decision T 1423/07 Cyclic Amine Derivative/BOEHRINGER INGELHEIM of 19th April 2010 in which Board of Appeal 3.3.02 considered double patenting and G 1/05. This Board found that there was no uniformity between contracting states to the EPC about double patenting and disagreed with decision T 307/07 that Art 60 EPC could be used as a basis for refusing a European patent application for double patenting. The Board also held that if the applicant had a legitimate interest in the grant of the subsequent application comprising subject-matter already included in a granted patent then Art 60 did not justify refusal of the later application. On the facts the Board held that because the 20 year term of one patent would end a year before that of the other, the applicant had a legitimate interest in obtaining the grant sought. They would lose a year’s worth of protection otherwise and the appeal was allowed.

301.

Having considered these materials I derive the following principles relating to double patenting:

302.

First, the idea at the heart of the double patenting objections is that ordinarily an applicant should not obtain two patents for the same thing filed at the same time. That is because an applicant ordinarily has no legitimate interest in doing this.

303.

Second, however “double patenting” is not a ground of revocation of a patent. It is not in s72 of the Act nor Art 138 EPC. In the UK there are particular circumstances in which double patenting can lead to refusal (or revocation on the Comptroller’s initiative). They are defined by statute (s18(5) and s73(2)).

304.

Third, the EPO does recognise a double patenting objection as a ground for refusing amendments to a divisional application. Two conditions have to be satisfied. The proposed amended divisional claim has to claim the same subject-matter as a parent and the applicant has to have no legitimate interest in obtaining the divisional claim. Generally if the first condition is true the second is likely to follow but there can be cases, such as T 1423/07, in which a legitimate interest in obtaining the divisional claim can be shown to exist irrespective of the relationship between the scope and subject matter of the parent and divisional claims.

305.

Fourth, the EPO does recognise that if the independent claim of a divisional has the same scope as an independent claim in the parent then double patenting exists and an amendment which would give rise to that state of affairs will be refused. It is a test of substance and not merely form. It is not the settled jurisprudence of the EPO that double patenting exists merely because the scopes of the two claims overlap.

306.

Fifth, one needs to take care when comparing the different procedural circumstances in which this point can arise. The EPO only deals with one patent at a time and so an EPO case considering a divisional application will not contemplate making changes to the claims of the parent to overcome an objection. Although post-grant centralised amendments are now possible in the EPO, that is a fairly recent development. In the UK the point only arises in two very specific statutorily defined circumstances.

307.

Sixth, the judgment of the Court of Appeal in Marley’s Roof Tile is directed to a point on statutory construction of s73(2) of the 1977 Act. Although both s73(2) and the objection applied by the EPO are referred to as “double patenting” and have the same underlying rationale, the Court of Appeal’s judgment is not binding on the question arising in relation to the exercise of discretion under s75 of the Act.

308.

Seventh, a patentee may have a legitimate interest in obtaining a divisional patent with claims which are broader than but encompass the scope of a parent patent. During prosecution of the parent the examiner may object to a broad claim but indicate that a narrower claim would be accepted. The patentee may not agree but may recognise that to win the point will need many more months or even years of proceedings and possibly appeals. This is true in both the EPO and UKIPO. However in the meantime the patentee may want to obtain an early grant because a competitor has launched an infringing competitive product. The infringing product may be very close to the patentee’s invention and within the narrow claim on offer. At an early stage in this new market for a new product the patentee’s business may be particularly vulnerable and the loss caused by the infringement may well not be fully compensatable in damages under s69 of the 1977 Act (Art 67 EPC). Thus the patentee decides to take what is on offer and obtain grant of the parent patent with a narrow claim. Under s76 of the Act and Art 123(3) EPC post grant amendments are not permitted to widen the scope of monopoly so, in order not to give up scope to which the patentee is entitled, a divisional application is filed. If the divisional is granted with a broader scope than the parent then the patentee’s stance has been entirely vindicated.

309.

In my judgment a patentee in the case I have described has a legitimate interest in obtaining the divisional in addition to the parent and it would be wrong to apply a double patenting objection based on overlapping scope such as in T307/07 or Marley’s Roof Tile to prevent this. I also do not believe that a disclaimer or carve out amendment from the divisional to remove the scope of the parent claim should be required since such negative features can introduce uncertainty and make the claims hard to interpret.

310.

I find that as a matter of UK law a double patenting objection taken as a ground for refusing a post-grant amendment to a claim can be taken but should only be taken in the following circumstances:

i)

The two patents must have the same priority dates and be held by the same applicant (or its successor in title);

ii)

The two claims must be for the same invention, that is to say they must be for the same subject matter and by this I mean they must have the same scope. The scope is considered as a matter of substance. Trivial differences in wording will not avoid the objection but if one claim covers embodiments which the other claim does not, then the objection does not arise.

iii)

The two claims must be independent claims. This necessarily follows from the rejection of the point on overlapping scope. If two independent claims have different scope then there is no reason to object even if the patents contain dependent claims with the same scope. The point might arise later if an amendment is needed e.g. to deal with a validity attack but in that case the point can be taken then.

iv)

If the objection arises in the Patents Court in which both patents are before the court then it can be cured by an amendment or amendments to either patent.

v)

Even if the objection properly arises in the sense that two relevant claims have the same scope, if the patentee has a legitimate interest in maintaining both claims then the amendment should not be refused.

Do the amendments lead to double patenting?

311.

Subject to double patenting, so far I have decided that the second, third or fourth conditional amendments to the 498 patent are allowable and so is the amendment to the 650 patent. However the final conclusion depends on the impact of the other invalidity attacks.

Infringement

312.

For the purposes of considering infringement of the 498 and 650 patents, the Wii system can be taken to comprise three items: a console, a Wii remote and a sensor bar. The console contains the main processor and graphics system. It is connected to a domestic television. The sensor bar is a small rectangular object about 24 cm long. When the Wii is set up in the home, the sensor bar must be placed either just above or just below the television screen and the system must be told which location has been used. At either end of the sensor bar there is a small array of infrared LEDs. These emit infrared light which can be detected by an infrared camera on the Wii remote.

313.

As mentioned above, the Wii remote is a hand held unit used to make gestures and thereby play games. It contains an on-board processor and battery and can communicate with the console via the Bluetooth wireless interface. In addition to the infrared camera, as input devices the Wii remote has some momentary buttons for the user to press and a three axis accelerometer. A more advanced form of Wii remote called the Wii Remote Plus also has a three axis rate gyro sensor but nothing turns on that. The Wii remote has three output devices: visible light LEDs, a sounder and a rumble motor.

314.

Philips relies on the Wii Tennis game to establish its infringement case. In Wii Tennis the user holds the Wii remote and swings it, in effect, as if it was a tennis racquet. On the screen the player’s avatar plays tennis. The shot played by the avatar corresponds to the manner in which the user swung the remote.

315.

Philips also relied on another game called Okami. The difference between Tennis and Okami is that Tennis uses the accelerometer whereas Okami uses a combination of the camera for some gestures and the accelerometer for others. The way the case was argued at trial focussed on Wii Tennis and no distinct issue arose relating to Okami. They stand or fall together.

316.

The correct starting point for infringement is to consider the system when it is up and running, playing a game such as Wii Tennis.

317.

The Wii system comprises an electrical apparatus (the console), a portable pointing device (the Wii remote), a camera (the infrared camera in the Wii remote) and motion sensing means (the three axis accelerometer and, in the Wii Remote Plus, the gyro sensor). The system has a digital signal processor with the relevant features. There was no issue about whether the digital signal processing takes place all on the console, all on the Wii remote or in part on both. The camera can image a region pointed to. The digital signal processor is arranged to analyse gestures made with the pointing device based on a motion trajectory of the pointing device.

318.

Nintendo denied infringement on the basis that there are no room localisation beacons and that the system (or the signal processor in the system) does not recognise the part of the room the pointing device is pointing to. The two points are really the same. Plainly the infrared LEDs in the sensor bar are beacons and plainly images of them taken by the infrared camera on the remote are used by the system. Nintendo’s submission is that all the Wii can do is recognise if the controller is pointing at the sensor bar and so it does not have the “proper” ability to recognise to which part of the room the pointing device is pointing. Therefore it is outside claim 1 of both 498 and 650. In other words Nintendo’s point on beacons is that they are only “room localisation” beacons if the system can do what Nintendo refers to.

319.

Stated in this way Nintendo’s case is wrong on the facts. The Wii system is not limited to being able to answer the binary question of whether or not the remote is pointing at the sensor bar. Subject to an ambiguity I will mention, it can determine which direction it is pointing in, albeit that determination is only ever relative to the beacons on the sensor bar.

320.

However this is not the real issue. The real issue is that Nintendo submits the determination I have described is not sufficient to bring the Wii into the claim. First Nintendo argues that the ability to generate a determination of direction relative to the sensor bar is not enough. That depends on a point on claim construction which I have rejected. It is true that there is nothing in the Wii system which fixes the location of the sensor bar relative to the room as a whole but that does not matter. The fact that the user could move the television and therefore the sensor bar so that on different occasions relative to the room the pointing device would be pointing at a different place in the room but the same place relative to the sensor bar does not matter. What is important is that the pointing direction of the hand held device can be understood relative to a known frame of reference, the sensor bar. That is enough.

321.

Second Nintendo argues that the Wii’s detection of pointing direction using the camera is ambiguous because only two beacons are used. On this Nintendo referred to Prof Reid’s evidence in which he accepted that the Wii cannot give unambiguous recognition of the direction because it cannot distinguish between a left handed person holding the remote pointing one way and a right handed person holding the remote another way.

322.

Philips submitted that the left handed/right handed scenario was contrived and Prof Reid made the same suggestion at one point in his cross-examination. In my judgment the scenario demonstrates a fact about the inherent nature of aspects of the Wii system. By using only two beacons rather than three asymmetric beacons there is a degree of degeneracy about the beacon configuration in the Wii. It would be possible to make a different system by modulating the beacons in order to allow the processor to distinguish between the left hand beacon and the right hand beacon but that is not how the Wii works. Mr Carr submitted the point was contrived since the Wii remote has an obvious top side but I am not convinced this meets the point. It may be that the Wii can distinguish between the two cases by using the accelerometer but this was not explored in evidence and may well be irrelevant since the claim feature relates to what can be done using electromagnetic radiation from the beacons. If this aspect of the system takes the Wii outside the claim then so be it. I do not accept that this scenario is so far-fetched as to be irrelevant to infringement.
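
By way of illustration only, the following is my own minimal sketch (it is not Nintendo's code and was not in evidence) of how a pointing estimate might be derived from the image positions of two identical beacons, and of why a two-beacon configuration is degenerate: relabelling the two beacon images, as happens when the remote is rolled through 180 degrees, leaves the pointing offset unchanged and flips only the roll, so the camera data alone cannot tell the two cases apart.

```python
import math

def pointing_estimate(beacon_a, beacon_b, image_centre=(512, 384)):
    """beacon_a, beacon_b: (x, y) pixel positions of the two beacon blobs
    in the camera image. Returns (offset, roll_degrees)."""
    # The midpoint of the two blobs stands in for "where the bar appears".
    mid_x = (beacon_a[0] + beacon_b[0]) / 2
    mid_y = (beacon_a[1] + beacon_b[1]) / 2
    # Offset of the bar from the image centre indicates how far off-axis
    # the device is pointing, relative to the sensor bar only.
    offset = (image_centre[0] - mid_x, image_centre[1] - mid_y)
    # Roll of the device, from the slope of the line joining the blobs.
    roll = math.degrees(math.atan2(beacon_b[1] - beacon_a[1],
                                   beacon_b[0] - beacon_a[0]))
    return offset, roll

# Swapping the two identical beacons leaves the offset the same but
# changes the roll by 180 degrees: the degeneracy discussed above.
print(pointing_estimate((400, 380), (620, 390)))
print(pointing_estimate((620, 390), (400, 380)))
```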

323.

However I find that this characteristic does not lead to a conclusion of non-infringement. I can see nothing in the claim language to exclude a system which in some very specific circumstances encounters this particular kind of ambiguity in its determination of pointing direction. Even if the particular scenario is not one within the claim, the system clearly can, on other occasions and always only relative to the beacons, recognise to which part of the room the pointing device is pointing.

324.

By the closing there was no infringement point taken based on a motion or motion trajectory, on transmission of motion trajectory or a characteristic signature or on gesture analysis. No other point arose on infringement and I find the Wii system when it is running Wii Tennis or Okami falls within claims 1, 2, 3 and 5 of the 498 patent as granted and as amended in all of the various proposed amendments. It also satisfies claim 1 of 650 as granted, claim 1 as amended and claims 2, 3 and 6 of 650.

325.

As with 484, under s60(1) I find that the sale of a Wii console plus remote plus a disk of relevant software would not amount to the sale of a product within the claims. In this case relevant software means Wii Tennis or Okami or any other software with the same characteristics relevant to 498/650.

326.

Philips’ pleaded case under s60(2) was that each of the following were means essential: the Wii, Wii remotes, the Wii U Basic Pack and the Wii U Premium Pack. The section relating to infringement of 484 defines these terms. A point which did not matter for the 484 patent is that while the Wii U Premium Pack includes a sensor bar the Basic Pack does not, although the console is still designed and intended to be used with one. Again I will refer to the collection of hardware and firmware with a console and a remote as the “hardware package”.

327.

As with 484 and for the same reasons, I find that a hardware package (whether with or without the sensor bar) sold bundled with a disk carrying relevant software for Wii Tennis or Okami amounts to supply of a means relating to an essential element for putting the invention into effect. Also for the same reasons I find that this hardware package without a disk of relevant software is a means essential. The Balance Board is not relevant to 498/650. When the sensor bar is not included, it would have to be added to bring the system within the claim but that does not mean a package sold without a sensor bar is not a means essential. In my judgment it is.

328.

Unlike the position on 484, for the 498 and 650 patents I find that the Wii remote on its own is a means essential. It is the pointing device referred to in claim 1 and forms a critical part of the combination needed to put the claims into practice.

Novelty

329.

Nintendo relied on three citations in support of their allegation of lack of novelty: Wacom, the Philips application, and Sony.

Wacom

330.

The Wacom application was published in 1995 in Japanese. The parties were able to agree a certified translation into English. It is fair to say that the English in the agreed translation is sometimes somewhat stilted but the essential meaning is clear enough.

331.

The invention relates to a data input device to be used in a computer system such as a personal computer or a game machine. There is an input device called a “hand manipulated type data input device used to input pointing data for cursor control, text data or other key data by manual operation”. “Key data” means information obtained from typing a key such as on a keyboard. An example of the Wacom device is shown in Figure 1:

332.

The hand held device has keys (9), buttons (11a, 11b) and has a “projected light beam detector”, in other words a camera (10). The display in Fig. 1 has two light sources which are beacons just like the sensor bar in the Nintendo Wii. The system can detect the position and orientation of the Wacom hand held device by analysing the image of the beacons determined by the camera. Just as the sensor bar in the Wii can lead to potential ambiguity since there are only two beacons, so the Wacom system tolerates the same degree of degeneracy.

333.

Using the camera, the two beacons and image processing, the system is able to determine what it refers to as “4D data” about the hand held device. The 4D data consists of the position in space in three dimensions x, y and z and the angle θ representing the tilt angle of the manipulator.

334.

The point of the system is to “input pointing data for cursor control, text data or other key data” (paragraph 1). In other words it replaces a computer mouse, allowing the cursor on the screen to be moved by manipulating the hand held device just as one would move the cursor on the computer screen by moving a mouse. An alternative embodiment is described in which the user wears a clip which includes a light beam projector and the camera on the hand held device has been turned round and points back towards the holder. This could be used, for example, by a user standing on a commuter train.

335.

It is accepted that claim 1 as granted of the 498 patent lacks novelty over Wacom. The construction of claim 1 which is satisfied by the Wii also covers Wacom. The Wacom system does recognise to which part of the room the pointing device is pointing. This conclusion also applies to claim 1B.

336.

There is no disclosure in Wacom of a hand held device which has both a camera and a further separate motion sensing means such as an accelerometer. Any claim which includes a requirement for both a camera and a separate motion sensing means will be novel over Wacom. Claims 1A, 1C and 1D of 498 are therefore novel on that basis. So too is claim 2 (and therefore claim 5).

337.

The position relating to claim 1 of 650 (whether as granted or as amended) depends on whether Wacom includes “means for estimating a motion or a motion trajectory of the pointing device”. A similar question arises for claim 3 of 498.

338.

Nintendo submits that the Wacom device does estimate a motion or motion trajectory. Philips does not agree. It submits that the Wacom device is a replacement for the computer mouse and simply moves a cursor on the screen based on the manipulation of the device. This does not involve estimating or analysing a motion or motion trajectory. In my judgment that is right. There is no description of any analysis of motion or motion trajectory at all. The fact that the cursor moves as the hand held device moves does not mean there is any estimation of motion or motion trajectory taking place.

339.

However Nintendo submitted that there were examples in the art of application software available for a personal computer in which movement of the mouse was estimated. The examples which I have accepted were the two computer games from 2001, Black & White and the Harry Potter game, and the Opera web browser from April 2001. All of these involved gesture analysis based on gestures made with a computer mouse, which I have described in the common general knowledge section above. Nintendo submitted that necessarily therefore they involved estimation of motion.

340.

The gestures are all two dimensional in the sense that they are made on a flat surface using a mouse but in my judgment Nintendo is correct that in these examples the processor is estimating a motion or a motion trajectory of the mouse and is analysing gestures made with the mouse based on the motion trajectory.
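
To illustrate the kind of processing involved (this is my own minimal sketch and not the actual code of Black & White, the Harry Potter game or Opera), gesture analysis of this sort can be reduced to collecting successive pointer positions into a trajectory and classifying the overall displacement as a simple directional stroke:

```python
def classify_stroke(trajectory, min_distance=100):
    """trajectory: list of (x, y) pointer samples in screen pixels,
    recorded while the gesture is being made. Returns a stroke name
    or None if the movement is too small to count as a gesture."""
    if len(trajectory) < 2:
        return None
    dx = trajectory[-1][0] - trajectory[0][0]
    dy = trajectory[-1][1] - trajectory[0][1]
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return None  # too small to be treated as a deliberate gesture
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# A drift to the right across successive samples is recognised as a
# "right" stroke, which an application might map to a command.
print(classify_stroke([(10, 200), (60, 205), (150, 210), (260, 212)]))
```

The point is only that recognising even so simple a stroke requires the processor to estimate the motion trajectory of the mouse from successive positions; the real applications used richer gesture sets, but the principle is the same.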

341.

The teaching of Wacom is to make a computer system, such as a personal computer, in which the mouse is replaced by the hand held pointing device. In my judgment Nintendo is correct that a system made according to Wacom which ran any of Black & White, Harry Potter or Opera would have all the characteristics of claim 1 of the 650 patent or claim 3 of the 498 patent. However I do not accept that this argument can succeed as a novelty objection. Novelty requires the prior art to clearly and unmistakably disclose something within the claim or disclose something which would inevitably be within the claim if carried out. The combination is not inevitable.

342.

A separate argument is as follows. What is disclosed by Wacom is a combination of hardware and operating system software which works in a particular way (the “Wacom system”). The submission is that the fact that the combination of the Wacom system with one of these items of application software would be a system within the claim shows that the Wacom system itself is suitable for performing the relevant functions. Therefore the Wacom system as disclosed is within claim 1 even if the particular combination with the relevant application software would never have been made before the priority date. I do not accept that submission. What is disclosed by Wacom is a system with software which will determine the 4D data described but no software in it which is suitable for estimating a motion or a motion trajectory. Unless and until the application software is actually combined with the Wacom system, the system is not suitable for performing the claimed functions.

343.

Accordingly I find that claim 1 of the 650 patent (as granted and as amended) and claim 3 of 498 are also novel over Wacom.

344.

In summary, for 498 claim 1 as granted and claim 1B lack novelty over Wacom. All other claims of both patents are novel over Wacom.

Philips Application

345.

The Philips Application was published in October 2000. It describes a remote control for a display apparatus. A top view of the hand held device is shown in figure 1 as follows:

346.

The hand held device works by adding “orientation means” to a normal TV remote control. The orientation means is a camera. The camera obtains positional information by imaging three spots which appear on the computer screen. These spots are plainly room localisation beacons as required by the claims (an alleged point of distinction is addressed in the section dealing with Sony below). The point of the system is to manipulate the cursor on screen just as a computer mouse would. Thus claim 1 of 498 as granted lacks novelty over the Philips application.

347.

The document refers to detecting and analysing movement of the projected image (e.g. p2 ln12 and ln23), to detecting relative movement between the hand held device and the display (p5 ln19) and to converting the measured translation or rotation of the hand held device into commands (p5 ln30). Although this language refers to movement, what the document is describing is not the same thing as estimating a motion or a motion trajectory as required by the claims of the 498 and 650 patents nor is the document describing analysing a gesture. What the Philips application is describing primarily is a system in which the movement of a cursor on the display screen is governed by the movement of the hand held device.

348.

Detecting rotation of the device is also mentioned but this means detecting the angle the device is sitting at. It is not estimating motion or a motion trajectory.

349.

In one embodiment the orientation means has an activation means such as a button to press to turn it on. One example of the activation means is “a grip or tilt sensor for detecting the event of picking up the remote control apparatus”. Figure 1 shows an embodiment with a grip sensor (6). The point of having some kind of activation switch is to avoid the camera being a drain on the batteries in the remote control. Although Prof Steed took the view that a tilt sensor could have been an accelerometer or gyroscope, I do not accept that these more sophisticated devices are disclosed by the Philips Application. Whether they are obvious is another matter (below). The kind of tilt sensor which the skilled person would regard as being referred to was something like a simple mercury tilt switch. This detects that the device has tilted. The fact it has tilted is interpreted as being the result of picking it up.

350.

Nintendo submitted that the tilt sensor was a motion sensing means and was within claim 2 or claims 1A, 1C or 1D of 498. Philips submitted that a simple mercury tilt switch can only detect that a motion has occurred and so is not within claim 2 of the 498 patent. I accept that submission. The mercury tilt switch is not a motion sensing means and cannot sense a motion.

351.

Furthermore to be within claim 1A, 1C or 1D of 498 the system must include a digital signal processor arranged to analyse gestures made with the pointing device based on a motion trajectory of the pointing device. Similarly to be within claim 1 of 650 (as granted and as amended) there must be means for estimating a motion or a motion trajectory of the pointing device. Finally even for claim 5 of 498 there must be transmission of some motion trajectory information.

352.

In cross-examination Mr Speck put to Prof Reid that a mercury tilt switch would allow the system to detect an action consisting of tipping the device forwards and then backwards. Prof Reid agreed. An important subtlety in Counsel’s question was that it involved two steps, detecting a tip backwards and then separately detecting a tip forwards again. Nintendo submitted that this showed that the device could estimate a motion or motion trajectory and analyse a simple gesture.

353.

Mr Speck’s question is a nice point but it needs to be treated with care. There is no disclosure in the Philips application of doing this. The tilt sensor is disclosed simply as an activation device.

354.

I do not accept that the system described in the Philips application, which simply activates the orientation system when the tilt sensor detects motion, is analysing a gesture based on a motion trajectory. No motion trajectory has been determined at all. Nor do I accept that such a system is thereby estimating a motion or estimating a motion trajectory of the pointing device.

355.

There is no teaching in the Philips Application to program the remote control device along with a suitable television receiver to interpret a two step tipping up and down gesture in the manner referred to in the question put to Prof Reid. Without programming it in that way I do not accept that what is disclosed is suitable for performing such a function. The issue raised by the cross-examination might have been something to consider as a matter of inventive step but there was no suggestion that any skilled person would think of doing this starting from the Philips application and so it is not relevant.

356.

In summary I find claim 1 and claim 1B of 498 lack novelty but no other claims of either patent lack novelty over the Philips application.

Sony

357.

The Sony application was published in 2002. The document is written in Japanese but the parties were able to agree a translation. It is lengthy and discloses a number of embodiments. In parts it is somewhat confusing. Essentially the Sony disclosure is of an electronic game system which uses a hand held unit in order to play the game.

358.

Figure 1 relates to the first embodiment. It is as follows:

359.

In Fig. 1 the hand held unit is shown as a gun. The system also generates a target (item 15 in fig 1) and the object of the game is to shoot the target. To achieve this the user tries to aim at the target and the system calculates the “aim point” of the gun. This is depicted in fig 1 as item 16. The gun contains a camera and there are identifiers (items 14) depicted on the screen. The aim point of the gun is analysed by using the image obtained from the camera.

360.

The second and third embodiments are not materially different from the first embodiment. The fourth embodiment includes two aspects worth mentioning. First the user is given two items to hold – a gun and a shield. Strictly speaking a shield does not have an “aim point”, but it has the equivalent, namely the place at which the shield is pointing in order to protect the user from virtual shots being fired by a virtual opponent. Second, the fourth embodiment discusses using camera images based on multiple exposures. This means that one image includes multiple images of the identifiers over a period of time as the gun moves.

361.

Philips characterised these first four embodiments as a position detection system. The gun is a pointing device and the system determines the aim point of that device. Nintendo relied on the frequent references in the Sony application to detecting “movement” of the identifiers and put this to Prof Reid. Philips submitted that Nintendo’s case put to Prof Reid was inconsistent with Prof Steed’s evidence about the first four embodiments. Philips contended that Prof Steed accepted that the first four embodiments were position detection systems.

362.

The passages relied on by Nintendo and put to Prof Reid indicate that the determination of the aim point can include a step of correcting what would otherwise be determined to be the position of the aim point using a “moving body predictive analysis” (see paragraphs 34 and 44-49). In other words the system would monitor the path of the aim point of the moving gun and make a prediction based on that information. In my judgment that is the correct reading of the document and I believe Prof Reid agreed with it. However it does not mean that Sony is here teaching a system in which the movement of the gun as distinct from its position is being used as an input to give commands to the apparatus other than in the very limited sense which I have already described. Movement is used to try and accurately determine the aim point, i.e. the position and orientation, but that is all. Nintendo relied on more general language in the document (e.g. paragraph 17) but read in the context of the document as a whole I do not accept there is any wider teaching in Sony (subject to the fifth embodiment below). Prof Reid did not read paragraph 17 in the way that Nintendo contended for either.

363.

In the fifth embodiment the hand held unit is not a gun, it is a ping pong bat. The bat has a camera which images the identifiers on the screen. The embodiment is shown in figure 27:

364.

On the screen a virtual ball is hit towards the human player. The human player performs the action of hitting the ball back and that hits the ball displayed on the screen back towards the virtual player (paragraphs 181 and 183). There is the same position detection system as the other embodiments (paragraph 182). The system detects the position and orientation of the bat at the moment which corresponds to the time when the virtual ball would reach the user. This allows the system to calculate a hit ball trajectory as if it were a real ball actually hit from the detected position and at the detected angle of the bat (paragraph 183).

365.

Nintendo submitted that in implementing this embodiment the system could work out the position of the bat in the “z direction”, i.e. the distance between the bat and the screen. After all if the bat is close to the screen the time the ball reaches it will necessarily be earlier than if the bat is further from the screen. It is possible to calculate distance in the z direction by measuring the apparent separation of the identifiers as they appear to the camera on the racquet. Although it was put to Prof Reid on the basis that it “could” be done I accept it is disclosed by Sony.
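
The geometry behind this is the standard pinhole-camera relationship (the sketch below is my own illustration, not text from Sony): if the identifiers are a known physical distance D apart and their images appear d pixels apart to a camera with focal length f expressed in pixels, the distance to the screen is approximately z = f × D / d.

```python
def z_distance(apparent_separation_px, identifier_separation_m=0.40,
               focal_length_px=1000.0):
    """Pinhole-camera estimate of the bat-to-screen distance: the
    identifiers appear closer together the further away the bat is.
    The separation and focal length values are illustrative only."""
    return focal_length_px * identifier_separation_m / apparent_separation_px

# On these illustrative figures, identifiers seen 400 px apart are
# roughly 1 m away; seen 200 px apart, roughly 2 m away.
print(z_distance(400))  # 1.0
print(z_distance(200))  # 2.0
```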

366.

Nintendo submitted that all of this showed that Sony was teaching the monitoring of the swing of the bat and the use of a mimetic gesture as an input to the game. I do not accept that. All the disclosure I have referred to so far teaches is to calculate what the position and orientation of the bat is at a given point in time. Movement is used in a predictive way to improve that calculation but that is all. There is no wider disclosure in Sony.

367.

Another aspect of the fifth embodiment is the contact sensor mat 84 which is described in paragraph 185 as follows:

“Further, at the time of a smash, for example, the user steps down strongly on the contact sensor 84. In this instance the time at which the contact sensor 84 was stepped on is deemed a trigger point and position detection is performed by the asynchronous shutter.”

368.

The asynchronous shutter referred to is a way of controlling the camera shutter so as to take a picture at a given instant. What is described is an alternative way to determine the moment in time at which the ball is deemed to hit the bat and therefore to calculate the position and orientation of the bat in order to simulate a shot. There is no more disclosure here of monitoring a swing than elsewhere.

369.

In paragraph 186, the document states that “the position detection system 5 is applicable not only to ping pong smashes but also similarly to tennis smashes, golf shots, baseball batting, dance steps, and so on”. No details are provided for this and I will address what the skilled person might or might not do in the light of this passage when considering inventive step.

370.

Finally Nintendo referred to paragraph 190 of the Sony application which states that:

“game player (user) movements and actions similar to actions in the game world can be used as input data to reflect real world actions in virtual worlds such as games and enable highly accurate movement detection enhanced in real-time feel”

371.

Read with hindsight in the context of this case, the language could be understood as referring to a system which monitors movement such as the swing of the bat and uses mimetic gesture as an input to the games. However read in the context of the document as a whole I do not accept that anything is disclosed by this paragraph (or the document as a whole with this paragraph in it) beyond what I have identified already. The system which is described lets a user perform actions, such as the swing of a table tennis bat in order to notionally “hit” a virtual ball in the virtual world of the game but all that the system is actually interested in is the position and angle of the bat at the relevant moment. The player may think otherwise but a system working as described is actually just determining the position and orientation of the bat at an appropriate instant and simulating the shot accordingly. There is nothing in Sony for example which describes trying to determine the speed of the swing in order to use that speed as an input into determining the flight of the ball. It is true that the passage refers to highly accurate movement detection but in my judgment it is important to see that in context. The document is concerned with accurately deriving the position and orientation of the hand held unit at a given moment. A slight movement means there will be a slight change in position. That is all the reference to movement is concerned with.

Sony and the claims

372.

I find claim 1 of 498 as granted is disclosed by Sony. The only debate could be about room localisation beacons. Clearly the identifiers are room localisation beacons but Philips submitted that the claim required them to be separate from the display (the same point applied in relation to the Philips application). I do not accept that this is the right construction of the claim. Thus claim 1 of 498 as granted lacks novelty.

373.

There is no disclosure of a hand held device with a camera and further separate motion sensing means and so claims 2, 5, 1A, 1C and 1D of 498 are novel (but not claim 1B).

374.

Claim 1 of 650 does not require a hand held device with a camera and further separate motion sensing means but it does require “means for estimating a motion or a motion trajectory of the pointing device”. I find that this element is satisfied by the Sony application. The system does estimate the motion of the hand held device. It uses that information for one and only one purpose (to make a better determination of the position of the aim point or in the fifth embodiment the position and orientation of the bat at a given moment) but that is not excluded by claim 1 of 650. Thus I find claim 1 of 650 (whether as granted or as amended) lacks novelty over Sony.

375.

Claim 2 of 650 – which is limited to a further separate motion sensing means as well as a camera - is novel.

376.

Claim 3 of 650 – which involves estimating motion with successive camera images – is not novel. That is how Sony works and claim 3 is not dependent on claim 2. The same conclusion follows for claim 3 of 498 (as granted).

377.

Claim 6 of 650 calls for a system in which the digital signal processor analyses gestures made with the pointing device based on the motion trajectory. That is not disclosed by Sony. The motion information, which I will accept for this purpose could be regarded as a motion trajectory, is not used in the manner required by claim 6. It is only used to determine aim point / position and orientation. There is no analysis of gestures disclosed in Sony.

378.

In summary for 498 claims 1 and 3 as granted and claim 1B lack novelty and for 650 claim 1 as granted or as amended and claim 3 also lack novelty over Sony. The novel claims are claims 1A, 1C and 1D of 498 and claims 2 and 6 of 650.

Obviousness

379.

Obviousness arguments are developed over the same three citations, Wacom, the Philips application and Sony, against all claims. I remind myself that the correct approach is the Pozzoli approach. For this purpose I will consider the matter from the perspective of the skilled games system developer. The common general knowledge has been identified above. Neither side advanced an inventive concept for any of the claims distinct from the exercise of construing them. The differences have been identified in the novelty section.

380.

It is convenient to mention one of Nintendo’s points at this stage since it applies to obviousness generally. Nintendo referred to Sabaf v MFI [2004] UKHL 45 and argued that even if the claim requires a camera and a different motion sensing means there is no inventive step in the combination as there is no synergy or technical effect from having the separate features. Nintendo contended that Prof Reid accepted this but he did not. His evidence was that there was a complementary relationship between a camera and other motion sensing means. The camera allows detection of motions with more precision whereas the other motion sensor allows detection of motions with a jerkier character. What he did accept was that the patent does not teach “data fusion”, in other words using data from both the camera and motion sensor simultaneously. He also accepted that the complementary relationship was not spelled out in the patent but he thought it would be seen once the patent was implemented. I accept his evidence. In my judgment this is not a case in which the principle described in Sabaf applies.

Inventive step: Wacom

381.

There are two inventive step cases to consider. The first relates to the combination of the Wacom system with application software which analyses mouse gestures. Philips called it the software case. The second is whether it was obvious starting from Wacom to include a MEMS-based sensor in the hand held device, particularly in the context of developing the Wacom device as a game controller. Philips called this the tilt pad case.

Wacom – the software case

382.

Taking the Black & White computer game as an example, Nintendo’s argument is that there can be no inventive step in having the Wacom device on a computer playing Black & White. The result of this combination would be a system with a hand held pointing device which analysed gestures made with the pointing device based on the motion trajectory of that device. The device would be using successive pictures of the beacons imaged with the camera to determine its pointing direction. This combination would fall within claim 1 and 1B of 498 but not the other amended forms of claim 1 of 498. It would also fall within claims 3 and 5 of 498 if it fell within claim 1. As regards the 650 patent it would also fall within claim 1 either as granted or as amended and also dependent claims 3 and 6. It would not fall within claim 2 of 650.

383.

Philips took an objection on principle to this argument. The objection was that the application software relied on for the argument had not been shown to be common general knowledge and so the argument was not open to Nintendo because it was not pleaded in the statements of case. Mr Carr referred in passing to ratiopharm v Sandoz [2008] EWHC 3070 but he did not refer to a particular passage. Mr Carr also pointed out that the software relied on was not referred to in Prof Steed’s first report.

384.

The point addressed in ratiopharm was the practice of referring simply to “common general knowledge” in the Grounds of Invalidity without any particulars. Floyd J (as he then was) explained that more details should be required, especially in a case in which the party wishes to advance an attack based on common general knowledge alone. The clear practice today is not to permit an argument developed over common general knowledge alone to be advanced unless it has been distinctly pleaded out. However it is not the case (at least in the High Court rather than the IPEC) that parties routinely plead out common general knowledge which they intend to rely on as something to be added to a cited reference in an obviousness case. I will refer to this sort of common general knowledge as secondary common general knowledge.

385.

Moreover the real point cannot depend on whether the secondary material is or is not common general knowledge. The fact that something is not common general knowledge does not mean that it is necessarily irrelevant for inventive step. It may be that, starting from a primary document, it is obvious to make a combination with a secondary document even if the secondary document is not common general knowledge. If the secondary document had been part of the common general knowledge then that would no doubt make the combination more likely but the law does not exclude as a matter of principle the possibility that the primary and secondary documents would be combined without inventive step even if neither was common general knowledge.

386.

Although software packages like Black & White, Harry Potter and Opera are not paper documents they are well defined items and can be regarded as documents for this purpose. I will use the term “citations”.

387.

The key factors which bear on whether to permit Nintendo to take the point are the following.

388.

First, I think that in future an argument that it was obvious to make a combination between two citations is one which should always be distinctly pleaded. This argument is a good illustration of why that should be done. It depends on very specific details of the secondary citation in this case. No-one reading Nintendo’s grounds of invalidity would have an inkling about this argument since that document only refers to Wacom and makes no mention of the relevant software. No-one reading Prof Steed’s first report would understand that this case was being advanced either. To permit it to be run in circumstances like these could work real injustice to a patentee.

389.

However, second, the legal teams on both sides are sophisticated and experienced patent practitioners. If Philips had wanted to ask for further information about any secondary citations or secondary common general knowledge on which Nintendo intended to rely they could have asked. One reason why such requests are not made by patentees is that they provoke the other party to ask for similar detail about the patentee’s case. Therefore I do not accept that I should hold it against Nintendo that they did not plead any secondary references on which they might rely (whether common general knowledge or not).

390.

Third, by the time the matter came to trial the point was fully developed in the evidence and argument. The time the objection should have been taken was earlier than the patentee’s closing submissions.

391.

For these reasons I will permit Nintendo to take the point.

392.

I turn to consider the objection itself. Mr Speck put the case as an illustration of the difference in perspective which one sometimes finds between asking whether a step was obvious to take and asking whether a combination involves an inventive step. He put this argument in the latter category. What could be inventive about the combination of a standard personal computer with a special hand held mouse (Wacom) and an item of publicly available software which ran on a standard personal computer (such as Black & White)?

393.

I have found that the two games and the existence of Opera (but not its gesture aspect) were part of the common general knowledge of the skilled games system developer in 2002.

394.

Two key points are these. First, the field of application of Wacom included personal computers and, second, Prof Reid accepted that without any modification Wacom is suitable for use with personal computer games which use a mouse input, such as Black & White and Harry Potter, and with the Opera browser.

395.

If one asks the question: would it occur to a skilled games system developer reading Wacom to think of using it to play Black & White in particular (or Harry Potter or use the Opera software), then I can see the answer might well be no. Why would they? However if one asks the question posed by Mr Speck, does the combination involve an inventive step, then it seems to me that the answer is also no. A person reading Wacom would expect to use it as a personal computer and thereby run any normal software on it which involved mouse input. There is nothing inventive in happening to run publicly available software like Black & White, Harry Potter or Opera on such a computer. No problem is solved by doing it and no technical advance exists as a result. It is simply an example of using the Wacom system for the very purpose for which it was disclosed. There is nothing to prompt a user of the Wacom system to go and seek out the specific examples of software relied on but while that sort of question may be relevant in some cases, I do not accept it is relevant in this case. I find that the various claims identified already do not involve an inventive step having regard to the state of the art.

396.

The point does not depend on whether the software was known to exist as a matter of common general knowledge as long as it was publicly available. It might be different if the software was utterly obscure but that is not the case. In any event the examples relied on were common general knowledge. The point applies to Opera as much as to the games because it does not depend on whether the behaviour of the software was itself common general knowledge, since the combination is not being made in order to consciously exploit that behaviour.

397.

In summary I find this argument leads to the conclusion that claim 3 of 498 as granted and claims 1, 3 and 6 of 650 either as granted or as amended do not involve an inventive step over Wacom.

Wacom - the tilt pad case

398.

Nintendo submitted it was obvious to adapt the Wacom mouse functionality for a general purpose gaming controller and that in doing so it would be natural to include a MEMS type motion sensor in the controller under development. In order to address the claim features concerning gesture analysis and motion trajectory, Nintendo submitted that such a controller would be suitable for performing these functions and also submitted that it could be used with some well known gesture based games such as Motocross Madness 2. In the same vein Nintendo relied on the entry of Konami codes.

399.

I accept that a skilled team would regard what was disclosed in Wacom as relevant to the development of a game controller. Prof Reid did not disagree and the document refers in terms to games machines. In taking this forward without invention I accept that the skilled team would adapt the keys and buttons and add extra sensors like a D pad and thumbstick. I also accept that by 2002 the skilled team would use infrared LEDs as the beacons (and a suitable camera) instead of lights as disclosed by Wacom and that it would be obvious to either add them to the console or put them on a separate bar as in the Wii.

400.

However the critical issue is whether it would be obvious (or would involve an inventive step) to add a MEMS device to the hand held device and at the same time keep the camera. Prof Steed’s clear opinion was that this was an obvious step to take but Prof Reid did not agree.

401.

In his first report Prof Steed explained that his reason for adding a motion sensor to the Wacom device was to augment the camera since a good game controller would need to be able to sense fast motion and a very fast camera to achieve this in 2002 would be costly and require processing power.

402.

Prof Reid did not accept this for two reasons. First he thought the skilled person would use the camera in the Wacom device to produce a mouse like functionality. While it could be used to aid the set up of games, the skilled person would only be motivated to direct game play using that position sensing ability in as much as it was possible to do so using a mouse. Second, originally he did not think accelerometers were used in games controllers before the priority date. In fact they were and Prof Reid accepted that in chief. He accepted that the combination of computer vision plus inertial sensors existed in research systems but did not accept the skilled person would have the motivation or knowledge to do that here.

403.

Nintendo submitted it was an obvious option to add a MEMS device because redundancy was a well known aspect of games controllers. However as Prof Reid explained, the redundancy which was present in games controllers already was to allow for different types of input modality because different users may prefer one or the other. So for the known controllers some users may prefer using the D pad with their finger or thumb while another may prefer to tilt the controller itself to achieve the same result. On the other hand adding a MEMS sensor alongside a camera would produce a device in which movement of the hand held controller would be sensed by two sensors in different ways but from the user’s point of view there would be only one input modality (movement).

404.

In cross-examination Mr Speck put to Prof Reid that the two input modalities were different because the camera system provided a mouse like function whereas the MEMS sensor would be used to provide a tilt functionality as in the known games controllers like the Sidewinder Freestyle Pro but Prof Reid maintained that the two sensors were analogous.

405.

I should also take into account that at one stage in his cross-examination, Prof Reid explained that he was not really qualified to judge as he was not an expert in games system development. Prof Steed of course has considerable experience relating to computer games.

406.

In my judgment it would not have been obvious for the skilled team developing a games controller over Wacom to add a MEMS sensor but keep the camera. Prof Steed’s reason for adding a MEMS device was to augment the camera in sensing fast motion. However necessarily that means the team has decided it wishes to use the camera for game play but decided it is inadequate. In that case they would probably dispense with the camera altogether. I also accept Prof Reid’s point that the argument that the known redundancy provides a reason why it was obvious to combine a camera and MEMS device is not sound. The relevant redundancy is to provide a user with different input modalities but this combination does not do that. Finally I do not accept it would be obvious to keep the camera to give the device a mouse-like functionality and add a MEMS device to give the device a tilt based functionality like the Sidewinder Freestyle Pro. That seems to me to be an argument based on hindsight.

407.

Accordingly I reject this obviousness case over Wacom.

408.

Although it is not necessary to do so, I will address the final point, namely that even if it was obvious to combine the MEMS sensor arrangement from known games controllers such as the Sidewinder Freestyle Pro into a Wacom-type device with a camera, it was still not obvious to make a system which analysed gestures based on a motion trajectory of the device. I do not accept that argument. The common general knowledge included various games in which sequences of joystick movements had a meaning, such as unlocking cheats (e.g. Motocross Madness 2). It would not involve an inventive step to use the game controller produced by this argument with these games. If it had been obvious to make such a games controller then the result, arrived at without any inventive step, would be a system within the relevant claims.

409.

In summary I find that claims 1A, 1C and 1D and claims 2 and 5 as granted of 498 and claim 2 of 650 involve an inventive step over Wacom.

Inventive step: Philips application

410.

Nintendo advanced three obviousness cases relating to the Philips application. The first related to the Graffiti system, the second was the same argument as over Wacom, based on Black & White, Opera and other mouse-based gesture analysis, and the third involved adding a MEMS device to the system.

Adding the Graffiti system to the Philips application

411.

Nintendo’s case was as follows. Graffiti was a method of entering written characters into a computer without a keyboard. The Philips application describes the device as being ideal for home shopping or web browsing (p1 ln10). To a skilled team putting this into practice, it would be natural to allow the user to enter alphanumeric data. The hand held device is designed to be used like a TV remote control, for example while the user is sitting on a chair watching the television. A keyboard would be a cumbersome way of entering alphanumeric data for such a user. It would be obvious to provide a Graffiti type interface as a way of allowing the user to enter alphanumeric data. This would be particularly useful for home shopping since home shopping needs some but not much alphanumeric input. Nintendo’s case was supported by the opinion of Prof Steed.

412.

Prof Reid did not agree it was obvious. Nintendo suggested it was not clear why. However Nintendo did not ask the Professor why he disagreed when he said so in cross-examination (T3/416 ln13-19), and I do not regard Nintendo's submission as a strong point. In any event Prof Reid did explain that he did not agree a keyboard was as poor an option as Nintendo submitted and pointed out that keyboards are used with Smart TVs.

413.

Philips submitted that Prof Steed's suggestion would not be adopted for a number of reasons. The main ones were these. First, Graffiti was found in a completely different type of device used for a different purpose. In Graffiti letters are written with a stylus on a touch pad. Second, if there was a need to enter alphanumeric data the standard way of doing this was, and would be, to use an onscreen keyboard, which would be easily incorporated into the Philips application by the user pointing at the desired number or letter and clicking. Third, in Graffiti the device (the touch screen) is held still while the stylus is used. In other words Graffiti involves an action akin to writing with a stylus on a fixed pad. Prof Steed's suggestion is to wave the hand held device in the air. There is no evidence that anyone has ever adopted this approach to entering text either before or since.

414.

I accept Philips’ submissions. They are sound reasons why it was not obvious to add Graffiti to the Philips application and I reject that obviousness case.

415.

An alternative case involved adopting gestures akin to those used in the Opera browser for the Philips application. I do not accept that was obvious either. Prof Steed could not say whether it had ever been done before or since.

Combining Black & White (etc.) with the Philips application

416.

This argument is the same as the one advanced over Wacom based on the Black & White and Harry Potter games and the Opera software. However whereas Wacom expressly describes using the hand held device with a personal computer, the Philips application is focussed primarily on a system used with a television. The Philips hand held device is a TV remote.

417.

Prof Steed accepted that games such as the Black & White and Harry Potter games relied on could not be played on a TV with a TV remote in 2002. As regards Opera, the position is this. Although the Philips application does expressly refer to internet browsing, that is mentioned as an advanced feature on a television. It is not referring to internet browsing on a personal computer. There is no evidence the Opera browsing software worked on a television. Prof Steed also accepted that there was no evidence anyone before or since had implemented a TV remote with gestures for actions like go back.

418.

If all that was disclosed in the Philips application was the idea of a television based system then I would not accept Nintendo’s case on this but at p4 ln18-19 of the Philips application the document states that “the invention is particularly suitable for television receivers, monitors, (game) computers and presentation devices”.

419.

In my judgment this is a disclosure that the hand held device can be used with a general purpose personal computer. Alternatively it is a disclosure which means that it would be utterly trivial to apply the teaching in the Philips application as a replacement for a computer mouse on a personal computer. On either basis the position is the same as over Wacom. To combine a personal computer using the hand held device described in the Philips application operating as a replacement computer mouse with games such as Black & White would not have involved an inventive step in 2002.

420.

Therefore I accept this second obviousness case over the Philips application. It has the result that claim 3 of 498 and claims 1, 3 and 6 of 650 lack an inventive step.

Adding a MEMS device to the Philips application

421.

Nintendo submitted it would be obvious to use a MEMS device as the tilt sensor in the Philips application. They were cheap and readily available in 2002. Prof Reid accepted that there were various ways of implementing the tilt sensor in the Philips application, such as a mercury switch or a rolling ball. He also agreed that a MEMS device was an alternative to these, albeit it would require circuitry to monitor the tilt angle and apply a threshold and so would be more complicated. Philips also submitted that a MEMS device would be counterproductive because the purpose of the tilt sensor in the Philips application is to save power, but the MEMS device needs power to work. Prof Steed accepted that such a sensor would drain power but did not know if the amount would be minuscule or large.

422.

In my judgment it would not involve an inventive step to implement the Philips application using a MEMS device as the tilt sensor. I think the idea would naturally occur to a skilled games system developer in 2002 given that these devices were cheap by that time. I am not satisfied that the skilled person would be put off by the amount of power required. There is no evidence the power required was sufficiently large to put someone off. Nor am I satisfied the implementation would have been sufficiently complex to be discarded as an option.

423.

The system which would be produced would be a system with a hand held device which included both a camera and a MEMS device. However the MEMS device in this context would have been included only in order to activate the camera. This would produce a system which included a motion sensing means but unless it was combined with some relevant software, it would not estimate a motion trajectory or analyse a gesture. I do not accept it would be obvious to think of using Mr Speck’s tipping up and down gesture to activate the device.

424.

Nintendo’s arguments which had the forensic objective of combining motion trajectory estimation or gesture analysis with the Philips application were the ones relating to Graffiti and the Black & White software. I have accepted the latter but not the former. I do not accept that the combination of both changing the tilt sensor into a MEMS device and also combining the device with application software like Black & White (etc.) involved no inventive step. That sort of multiple combination is like the well-known step-by-step Technograph approach, made with hindsight knowledge of the invention as the objective, which is unfair to inventors.

425.

Accordingly this third obviousness case does not render claims 1A, 1C or 1D of 498 lacking in inventive step, nor claim 1 of 650 (as granted or amended). A difficult question is whether it renders claim 2 of 498 lacking in inventive step. One limb of that claim calls for a motion sensing means for calculating a motion trajectory. The MEMS sensor used as a tilt switch is a motion sensing means but without software the system cannot calculate a motion trajectory. The other limb calls for a motion sensing means for sensing a motion. At first sight it seems odd to think that the MEMS sensor is a motion sensing means but is not for sensing a motion, yet that is the conclusion I have reached. There is no oddity. The point arises from the fact that the MEMS sensor is only being used to replace an activation switch. Without the relevant software to perform the relevant function the device is not within the claim.

426.

In summary claim 3 of 498 and claims 1, 3 and 6 of 650 lack an inventive step over the Philips application.

Inventive step: Sony

427.

There are two aspects to Nintendo’s case on inventive step over Sony. The first is based on the submission that Sony was monitoring the position and orientation of the hand held unit like the bat all the time and so it was obvious to monitor how a stroke was played, in other words it was obvious to analyse a gesture. Of the claims which are novel over Sony, this argument would only be relevant to claim 6 of the 650 patent in that it would involve analysing gestures made with the unit based on the motion trajectory but in a system with only a camera and no other motion sensing means. It would not be relevant to claim 1A, 1C or 1D of the 498 patent nor to claim 2 of the 650 patent because they require a further separate motion sensing means.

428.

Prof Reid accepted that a skilled person could do this in the sense that the signals generated by this system are such that they could be used in this way if a skilled person thought of doing so. However although it could be done, I am not persuaded that it was obvious to take this step over Sony. The idea of a computer game based on analysing gestures made by waving a hand held device in the air was not part of the common general knowledge and I have found it was not disclosed by Sony. The document itself is consistently focussed on using the signals from the camera in a more limited way. I find that claim 6 of the 650 patent involves an inventive step over Sony.

429.

The second aspect of the inventive step case over Sony relates to suggestions in the fifth embodiment that the position detection system is similarly applicable to other games such as golf. Prof Steed’s opinion was that this was straightforward to implement in the following way. The skilled games system developer would envisage a system in which the user swings a golf club shaped unit in order to hit an imaginary ball at their feet. The timing corresponding to the time the table tennis bat hits the virtual ball in the fifth embodiment would be the timing of the low point of the swing of the club as it hits the virtual golf ball at the user’s feet. The user would swing the club and the system would monitor the swing and in particular work out the position and orientation of the face of the club at the low point. That would allow the impact on the ball to be simulated and the ball would be shown flying off into the air on the screen.

430.

The camera on the face of the club would not be useful to monitor the swing because as the club was swung it would not face in the right direction to see the beacons all the time. It would be obvious to use a cheap and readily available inertial MEMS device and incorporate it into the club. After all inertial MEMS devices were part of the common general knowledge. Also Nintendo submitted that the use of an inertial sensor with another sensor to give a frame of reference was conventional.

431.

In cross-examination Prof Reid proposed an alternative approach of adding a switch to the sensor mat which would be hit as the club reached the bottom of the swing. Prof Steed did not regard this as safe and Nintendo submitted it was more complicated and costly than Prof Steed’s inertial sensor idea. Nintendo pointed out that the Sony document positively suggests golf and submitted that Prof Steed’s idea was the most straightforward idea available, was not fanciful and did not involve multiple steps. It was a simple addition to the disclosure to implement an express teaching in the document (golf).

432.

Philips submitted that this element of Nintendo’s case was driven by hindsight because there was nothing in Sony which suggested that the golf game was to be played in a different way from the ping pong. The switch proposal was in line with the teaching of Sony since it provided a trigger, like stamping on the mat in the ping pong smash case, for the system to know when the camera in the club should image the identifiers and allow the system to work out the club’s position and orientation. At this point the camera would be pointing towards the screen.

433.

Philips also pointed out that part of Prof Steed’s reasoning for dismissing the switch suggestion was that it was necessary to determine the whole swing in order to animate an avatar of the player on the screen. However as Philips submitted, Sony does not describe the animation of a player’s avatar.

434.

Philips also submitted that Prof Steed had not considered the possibility of an external camera to monitor the player’s motion as had been used in motion capture systems like the Eye Toy.

435.

I prefer Philips’ submissions on this issue to those of Nintendo. I do not think it would have been obvious to implement the golf suggestion in Sony simply by putting a MEMS device into a golf club. To a skilled games system developer reading Sony, what is required is to work out the position and orientation of the club at the single moment in time the club is at its lowest. The reader would see this as related to what is already disclosed in relation to ping pong.

436.

Acting without hindsight, I am not convinced it would be obvious to the skilled games system developer how to implement the reference to golf at all. They would try to use a similar approach to that disclosed for ping pong. They might try simply putting a camera on the club and imaging the beacons during part of the swing. They might try using the mat but with some facsimile of a golf ball or golf tee to detect the moment at which the head of the golf club passes the low point of the swing and is therefore contacting the ball. None of that would require the introduction of an inertial sensor in the golf club. They might try using an external camera to identify and characterise the point when the club should hit the virtual ball. If these approaches were impractical I doubt a skilled person would pursue golf with much enthusiasm, given that it is simply one item in a throwaway list of ideas.

437.

If Sony had described the monitoring or analysis of the whole swing of a ping pong bat in order to animate an avatar then I can see that a skilled person might well think of trying to monitor the whole golf swing and they might then think of what to do about the problem that a camera on the club face would not be able to see the beacons. It may be that this way of reading Sony accounts for Prof Steed’s opinion about how to implement the golf game. However I have rejected this interpretation of Sony. It does not disclose estimating a motion trajectory in that sense or analysing a gesture and I do not think that it would be obvious to set out to monitor the whole swing in order to implement golf.

438.

If the skilled games system developer did want to monitor the whole swing then the existing camera is clearly not sufficient for the task. In that case they might think that an external camera would be useful or they might well replace the camera on the club with a MEMS device. However neither step arrives at the claim. To arrive within the claim the skilled person has to keep the camera and add a MEMS sensor as well. There is no example of such a combination in the common general knowledge and I am not satisfied it would be obvious. I do not accept that the fact that there were known arrangements in which an inertial sensor was used with another sensor to give an absolute frame of reference is of much significance. The examples were specific combinations and none of them were of an inertial sensor and a camera.

439.

Thus I find that claims 1A, 1C, 1D as amended and claim 2 as granted of 498 and claims 2 and 6 of 650 involve an inventive step over Sony.

Summary of outcomes on 498 and 650 and impact on double patenting

440.

A summary of the conclusions I have reached in relation to validity of the various claims is as follows:

498 patent

i)

Claim 1 as granted is invalid. It involves added matter (at least one beacon). It lacks novelty over Wacom, the Philips application and Sony.

ii)

Claim 2 as granted is novel over all the cited prior art and involves an inventive step over all the prior art.

iii)

Claim 3 as granted lacks novelty over Sony and lacks inventive step over Wacom and the Philips application.

iv)

Claim 5 as granted is novel over all the cited prior art and involves an inventive step over all the prior art.

v)

Thus as granted 498 is invalid. Claims 2 and 5 are novel and inventive.

vi)

Claim 1A would be invalid as it involves added matter (at least one beacon).

vii)

Claim 1B would be invalid as it lacks novelty over Wacom, the Philips application and Sony.

viii)

Claim 1C would be valid. It cures the added matter. It is novel over all the cited prior art and involves an inventive step. However I will not permit an amendment in this form as it leaves the bracket problem uncured.

ix)

Claim 1D is valid and cures the bracket problem.

x)

Thus subject to double patenting I would permit an amendment to 498 in the form of the fourth conditional amendment (claim 1D). A set of claims in that form is valid.

650 patent

xi)

Claim 1 as granted is invalid. It involves added matter (at least one beacon). It is novel over Wacom and the Philips application but it is not novel over Sony. It lacks inventive step over Wacom and the Philips application.

xii)

Claim 2 as granted is novel over all the cited prior art. It is not obvious over any of Wacom, the Philips application or Sony.

xiii)

Claim 3 as granted is not novel over Sony. It lacks inventive step over Wacom and the Philips application.

xiv)

Claim 6 as granted is novel over all the cited prior art. It would be inventive over Sony but lacks inventive step over Wacom and the Philips application.

xv)

Thus as granted 650 is invalid. Only claim 2 is novel and inventive.

xvi)

Claim 1 as proposed to be amended does not involve added matter. It would be novel over Wacom and the Philips application but it would not be novel over Sony and would lack inventive step over Wacom and the Philips application.

xvii)

Thus subject to double patenting I would allow the amendment to claim 1 to cure the added matter and then make a finding of partial validity of amended 650 on the basis of claim 2.

441.

Therefore (subject to double patenting) I would make amendments and findings of partial validity which produced a result whereby claim 1 of the 498 patent was in the form of claim 1D (the fourth conditional amendment) and the only valid claim of the 650 patent was claim 2 dependent on claim 1 as amended. These two would be the relevant independent claims to consider for the purposes of the double patenting objection. They both cover the Wii but that is not sufficient to engage double patenting on the principles I have identified. The first question is whether they have the same scope for practical purposes.

442.

Claim 1D of 498 is set out above. Claim 2 of 650 dependent on claim 1 as amended has a very similar scope. Both claims are to a user interaction system with the elements of an electrical apparatus, portable pointing device, digital signal processor, room localisation beacons, camera and motion sensing means. They both cover a system which estimates a motion trajectory, albeit different words are used. In 498 the claim refers to analysing a gesture on the basis of a motion trajectory whereas in 650 the claim only refers to estimating a motion or motion trajectory. Philips accepted that claim 6 of 650 had the same scope as 498 since it refers to analysing gestures, whereas that language is absent from claim 1 of 650.

443.

The two putative independent claims overlap in scope in that claim 1 of 498 would be entirely within the scope of claim 1 of 650. Any difference in wording would not make any real difference to that conclusion.

444.

The difference in scope between these two putative independent claims is illustrated by the impact of the Sony prior art. Sony deprived claim 3 of 498 of novelty since that claim did refer to estimating a motion trajectory but was not limited to analysing gestures whereas it did not deprive claim 6 of 650 of novelty since that claim is limited to analysing gestures. Accordingly although the two putative independent claims overlap there is a material difference in their scope. I find they are not claims to the same invention.

445.

Nevertheless I will consider the question of legitimate interest in case I am wrong that there is no double patenting when claims merely overlap (cf. Marley’s Roof Tile). Philips submitted that it had a legitimate interest in obtaining the amendments sought to the 498 patent. The nature of that interest was the earlier date of publication of the application for it (14th September 2005) than the date relevant to the 650 application (28th August 2009). Nintendo does not concede any claims of the 498 application were infringed and that point, if it cannot be agreed in the light of this judgment, will have to be dealt with on the damages inquiry. Nevertheless on the hypothesis that the Wii infringes the relevant claims, Philips will obtain a greater sum in damages for past sales under the 498 patent as amended than under the 650 patent because the date on which that assessment will start will be earlier for 498 than it would be for 650. This is the result of s69 of the Act (Art 67 EPC). Four years’ worth of damages are at stake.

446.

As a fall-back Philips submitted that it would amend 650 to remove claim 6 in order to avoid double patenting. On the view I have taken of the issue, that would not help.

447.

Nintendo did not disagree with Philips’ analysis about the effect on four years’ damages of the refusal to permit the amendments to 498 but submitted this was not a legitimate interest such as would be permitted by the EPO. Nintendo also submitted that if this was a legitimate interest then it would be true in every case of divisional patents, since their application publication dates will always differ.

448.

I have not been shown any decision of the EPO in which the point taken by Philips in this case has been considered in this context. I think Nintendo is right that an argument of the same kind could be made in every case but it is not every case in which four years of very substantial damages are at stake. It may be that the EPO has not been confronted with this argument before in part because the EPO is not concerned with infringement (although I recognise that that does not prevent the point being put).

449.

In my judgment Philips has an entirely legitimate interest in making the amendment sought to the 498 patent. However, if double patenting were a problem as between the two sets of claims (which I have held it is not), then, had I allowed the amendment to 498, I am not aware of any legitimate interest Philips would have had in making the validating amendment it needs to the 650 patent to avoid its being revoked altogether. All that amendment would achieve would be to create the double patenting problem, but happily this point does not arise.

Reflection on 498 and 650

450.

The cases in relation to 498 and 650 are complex and intricate. Having reached the end I have paused to reflect on them. In fact the major issues are not complicated and my essential conclusions can be expressed as follows:

i)

The patent application which led to the 498 patent discloses a computer system with a hand held pointing device which has both a camera and a physical motion sensor such as an accelerometer. It is used to give hand waving gesture commands to a fixed unit. The gesture analysis is based on the motion trajectory of the device. The system uses room localisation beacons (plural) but not necessarily three beacons. One application of this combination which is described is to use it for playing games.

ii)

No prior art discloses that combination. Wacom has no physical motion sensor and no gesture analysis. The Philips application describes a pointing device with a camera but the idea of using it for hand waving gestures is not disclosed. The device can have a tilt sensor but that was just an activation switch. Sony is a game system with a hand held unit which used a camera but at best it is a near miss. It does not disclose the idea of monitoring the swing. It has no physical motion sensor.

iii)

For inventive step, none of the cited prior art leads naturally to the claimed combination and pointers to it are not in the common general knowledge. The common general knowledge did not include a device combining a physical motion sensor with a camera and the reasons advanced by Nintendo for putting those two sensors together in one unit are unconvincing. Also the common general knowledge did not include any game based on analysing hand waving gestures. The gesture examples relied on are two dimensional mouse gestures (like Graffiti or Black & White) or contrived (like the joystick cheat codes).

iv)

Both patents were granted with broader claims but those broader claims are invalid in various ways. However a claim to the combination described is not added matter because it is disclosed in the patent application. The Nintendo Wii system set up with Wii Tennis is an example of that combination and therefore infringes the claim.

Conclusion

451.

I find that the 484 patent is invalid.

452.

I find that the 498 and 650 patents are invalid as granted but valid as amended. They are infringed.
