OHIM opposition quality standards - the Office responds

A little while back, the IPKat focused on the quality checking exercised by the Office for Harmonisation in the Internal Market (OHIM) in respect of its opposition decisions. He thought that the exercise was a good one, but was alarmed that virtually one opposition decision out of every five appeared to be potentially defective (though not necessarily wrong).

Right: intent on finding ways of spending its budgetary surplus, OHIM examiners are testing ways of making sure that Community trade marks are properly affixed to the goods for which they are registered ...

The November issue of OHIM's informative Alicante News (pdf here) carries a response on this topic -- as Vincent O’Reilly (Director, Department for Industrial Property Policy) promised. The text reads:

Quality checking of opposition decisions at OHIM
Last month the results of the OHIM quality check of opposition decisions made in the third quarter of the year were published. These attracted some comment in the professional community, including in the IPKat blog, making this a good moment to give more details on the quality checking system we have been operating for some time.

The quality of decisions made in OHIM has always been an issue that has been accorded importance by the Office. In recent years increased efforts have been made to ensure that the measurement and improvement of the quality of decisions is more systematic and transparent both within the Office and to the outside world. The feedback that the checking provides to those involved in the decision making process is probably the most valuable output that the quality check programme produces.

Starting early in 2007, the Office established a system of reviewing decisions taken, based on statistically significant random samples. The scope of the check covers not only opposition decisions but also the classification of lists of goods and services and decisions on absolute grounds for refusal (acceptances and refusals). The check was carried out weekly by a group consisting principally of legal advisors.

The target rate of error free decisions under each heading was published, as were the standards to be applied. The results of the check have been published on a quarterly basis.

All the information in respect of this programme is available at:
http://oami.europa.eu/ows/rw/resource/documents/QPLUS/serviceCharter/qualityofdecisions_en.pdf

The Targets
For 2008 the Office decided on quality objectives for classification of lists of goods and services and absolute grounds for refusal, as well as for opposition. The target for oppositions was to produce at least 95% of decisions free from any error.

The Standards (what constitutes an error free opposition decision?)
A decision is error free when three aspects, namely format, content and outcome, are all in accordance with the standards the Office has established. The format of an opposition decision is correct only if, among other things, it clearly identifies the mark, the goods and services concerned, and the parties. It must also summarise the relevant points made by the parties. In order for the content to be correct there must, among other requirements, be a correct comparison of the signs and of the goods and services. The whole content must be expressed in clear language.

The outcome of a decision must be in line with the Guidelines adopted by the Office. An incorrect outcome is where the decision wrongly either (a) upholds the opposition in respect of some or all of the goods and services concerned or (b) rejects the opposition in respect of some or all of the goods and services concerned.

Some commentators question whether including issues such as format and content represents too great an emphasis on form over substance. The Office does not see it that way. The outcome is, of course, important, but the Office aims to have a high standard in all aspects of the decisions being made. In December last year the Office convened an external panel to review the quality checking system that had been established. The panel was generally satisfied with the approach of the Office but also made some suggestions, including a recommendation that errors be categorised for greater transparency.
After giving a tabulated set of results for the year so far, the piece continues:


The checking process
The cases to be examined each week are selected automatically by a computer program. The number of cases and the program together ensure that the sample is sufficient to guarantee that the results on a quarterly basis can be relied upon as being representative of the overall output of decisions. The checks themselves are carried out by a group of four highly qualified and experienced staff. They individually examine the cases assigned to them before discussing together those cases where possible errors are detected.
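For readers curious what such an automatic selection amounts to, the weekly draw described above can be sketched in a few lines of Python. This is purely an illustration: the sample size, the decision numbering, and the helper name are invented for the example and are not OHIM's actual parameters.

```python
import random

def weekly_sample(decision_ids, sample_size, seed=None):
    """Draw a simple random sample of decisions for quality review.

    decision_ids: identifiers of the week's opposition decisions
    sample_size: number of cases the checkers will examine
    """
    rng = random.Random(seed)
    # random.sample draws without replacement, so no decision
    # is reviewed twice in the same week
    return rng.sample(decision_ids, min(sample_size, len(decision_ids)))

# Hypothetical example: pick 25 of this week's 400 decisions
decisions = [f"B-{n:06d}" for n in range(1, 401)]
picked = weekly_sample(decisions, 25, seed=42)
print(len(picked))
```

Whether a fixed weekly count is statistically sufficient depends on the total output and the confidence level sought; the piece says only that the sample is large enough for the quarterly results to be representative.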

Feedback
The results of the weekly check are provided in writing directly to the examiners concerned as well as to their legal advisers and team leaders. In addition to the individual feedback, the process allows the group carrying out the check to identify
measures the Office needs to take to assist examiners in carrying out their decision making tasks. The measures include: coaching for individuals; general training on specific topics; improvement in standard letters and decision templates; clarification of Guidelines through amendments to the Manual of Trade Mark Practice.

Assessment of results in 2008
The Office is disappointed that it continues to fail to reach its targets. The appropriate response is not to reduce the targets or dilute the self-imposed standards that have been established. The Office will redouble its efforts to ensure that the measures taken as a result of feedback from the exercise have a greater impact on improving quality. The Office is also looking to see what additional measures can be implemented to underpin the process of delivering quality output. Among other things, a tool to increase clarity and consistency concerning the similarity of goods and services will be available in the coming year, to help address one of the important areas of error.
The IPKat is pleased that OHIM is willing to share information concerning its quality monitoring with its users, though he adds that, however much information is given, it is in the nature of things that paying users of the OHIM will always want more. Merpel says, it's a good sign when weblogs -- once thought of as somewhat scurrilous and subversive organs of the media -- are able to participate in debates concerning matters of interest to the IP community of which they are part. The quality checking of this blog is largely done by its readers, who are quick to pounce on errors (real or imagined) and to suggest ways of improving the service, for which thanks are sincerely due.

Both Kats wonder how other offices perform quality checks on oppositions, both in the field of trade marks and in patents. Is it time to generate a set of objective criteria and methodology, rather than leaving it for each authority to determine for itself what its quality standards should be? And is this something that offices can sort out among themselves -- or should the initiative be backed by WIPO?