A GNSS receiver is a complex device whose performance is affected by a wide range of internal and external factors. To the best of the authors’ knowledge, the first formal effort to define testing procedures for GPS receivers is found in the paper by Teasley [1], a work that anticipated the key concepts of Standard 101, published by the Institute of Navigation in 1997 [2]. Those procedures have been widely accepted by the GNSS industry and, two decades later, world-class testing firms still reference them in their white papers. In summary, the proposed testing procedures measure a receiver’s acquisition and tracking sensitivity, its time-to-first-fix and reacquisition times under different scenarios, its static and dynamic positioning accuracy, and its robustness against multipath and radio frequency interference.

The very nature of software-defined radio technology requires a broader approach. A GNSS receiver in which the baseband processing chain is implemented in software and executed by a general-purpose processor in a computer system is subject to other design forces that are equally important, and key to achieving real impact and technical, market, and social success; however, they are usually not captured by traditional GNSS testing procedures and quality metrics.

Key Performance Indicators (KPIs) are goals or targets that measure how well a given activity is progressing towards its overall operational objectives or critical success factors. KPIs must be objectively defined in order to provide a quantifiable and measurable indication of the progress of a product or service development towards achieving its goals.

S.M.A.R.T. is an acronym first mentioned in 1981 [3], and it is usually invoked when identifying and defining KPIs, as a reminder of their desirable features:

S.M.A.R.T. definition:
  • Specific: Is this KPI too broad, or is it clearly defined and identified?
  • Measurable: Can the measure be easily quantified?
  • Attainable: Is it realistic for us to obtain this measure within the given project framework? Can we take the appropriate measures to implement this KPI and see changes?
  • Realistic: Is our measure practical and pragmatic?
  • Timely: How often are we going to be able to look at data for its measure?

Hence, KPIs are not universal but specific to the particular project, product, or service to which they are applied. This page suggests a wide list of indicators derived from a list of Design Forces, defined below, to be considered when assessing the quality of a software-defined GNSS receiver. Their degree of S.M.A.R.T.-ness may vary in each particular context.

The design of a GNSS software-defined receiver needs to resolve design forces that may appear antithetical (e.g., portability vs. efficiency, openness vs. marketable product), and a “sweet spot” must be identified according to the targeted user and application. Hereafter, we identify 16 dimensions along which the performance and features of a software-defined GNSS receiver can be evaluated [4]. Click on their names to see a discussion of the concept and some possible metrics, indicators, and checkpoints:

1.- Accuracy

How close a Position-Velocity-Time (PVT) solution is to the true position.
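
For instance, horizontal accuracy can be summarized with metrics such as the 2D RMSE and the 50% Circular Error Probable (CEP). A minimal sketch, assuming the position fixes have already been converted to East/North errors in meters with respect to a surveyed reference point (the function name and inputs are illustrative, not part of any particular receiver):

```python
import numpy as np

def horizontal_accuracy(east_err_m, north_err_m):
    """Summarize horizontal accuracy from per-fix East/North errors
    (in meters) with respect to a surveyed reference position."""
    radial = np.hypot(np.asarray(east_err_m), np.asarray(north_err_m))
    return {
        "mean_2d_error_m": radial.mean(),          # average distance to truth
        "rmse_2d_m": np.sqrt(np.mean(radial**2)),  # 2D root mean square error
        "cep50_m": np.percentile(radial, 50),      # radius containing 50% of fixes
    }

# Example: 1000 fixes with a 1.5 m eastward bias and 3 m noise per axis
rng = np.random.default_rng(1)
print(horizontal_accuracy(1.5 + rng.normal(0, 3, 1000),
                          rng.normal(0, 3, 1000)))
```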

2.- Availability

The degree to which a system, subsystem or equipment is in a specified operable and committable state at the (random) start of a mission.
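
As a simple illustration, availability over a receiver run can be estimated as the fraction of observation epochs in which a valid PVT solution was delivered. A minimal sketch (the per-epoch validity flags are a hypothetical input):

```python
def availability(valid_fix_flags):
    """Fraction of observation epochs with a valid PVT solution."""
    flags = list(valid_fix_flags)
    return sum(flags) / len(flags) if flags else 0.0

# Example: a 10-epoch run in which the fix is lost twice
print(availability([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))  # -> 0.8
```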

3.- Efficiency

How fast the software receiver can process the incoming signal, and in particular how many channels it can sustain in parallel.
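
A common indicator here is the real-time factor: the ratio between the duration of the processed signal and the wall-clock time spent processing it. A value of 1.0 or above means the receiver keeps up with the incoming sample stream. A minimal sketch, where process_fn stands in for a hypothetical baseband processing job:

```python
import time

def real_time_factor(process_fn, signal_duration_s):
    """Signal duration divided by wall-clock processing time.
    Values >= 1.0 indicate real-time capability."""
    start = time.perf_counter()
    process_fn()  # run the baseband processing job
    return signal_duration_s / (time.perf_counter() - start)

# Example with a stand-in workload: 1 s of signal processed in ~0.5 s
rtf = real_time_factor(lambda: time.sleep(0.5), signal_duration_s=1.0)
print(f"Real-time factor: {rtf:.2f}")  # ~2.0
```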

4.- Flexibility

The ability of a system to respond to potential internal or external changes affecting its value delivery, in a timely and cost-effective manner.

5.- Interoperability

The ability of a system to work with other products or systems, existing or future, without restricted access or implementation.

6.- Maintainability

The ease with which a product can be maintained in order to isolate and correct defects and cope with a changing environment.

7.- Marketability

The degree to which a product meets the needs of its market and can attract potential buyers or users.

8.- Portability

The usability of the same software in different environments.

9.- Popularity

A complex social phenomenon with no agreed-upon definition, which can be described in terms of liking, attraction, dominance, or just being trendy.

10.- Reliability

The ability of a system or component to function under stated conditions for a specified period of time.
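
Under the common constant-failure-rate (exponential) model, reliability can be quantified as the probability of completing a mission of a given duration without failure, given the Mean Time Between Failures (MTBF). A minimal sketch of that textbook formula (not tied to any particular receiver):

```python
import math

def reliability(mission_time_h, mtbf_h):
    """P(no failure during the mission), assuming a constant
    failure rate, i.e. R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_time_h / mtbf_h)

# Example: a 24 h mission with an MTBF of 1000 h
print(f"{reliability(24, 1000):.3f}")  # -> 0.976
```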

11.- Repeatability

How close a position solution is to the mean of all the obtained solutions. It is related to the spread of a measure, also referred to as precision.
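
Unlike accuracy, repeatability does not require ground truth: it measures the spread of the solutions around their own mean. A minimal sketch computing per-axis standard deviations and the DRMS / 2DRMS radii, assuming the fixes are expressed in local East/North coordinates in meters (the function name and inputs are illustrative):

```python
import numpy as np

def horizontal_precision(east_m, north_m):
    """Spread of position fixes around their own mean (no truth needed)."""
    e = np.asarray(east_m) - np.mean(east_m)
    n = np.asarray(north_m) - np.mean(north_m)
    drms = np.sqrt(np.mean(e**2 + n**2))  # distance RMS about the mean
    return {
        "sigma_east_m": e.std(),
        "sigma_north_m": n.std(),
        "drms_m": drms,       # ~65% of fixes within this radius
        "2drms_m": 2 * drms,  # ~95% of fixes within this radius
    }

# Example: 500 fixes scattered (2 m per axis) around a mean position
rng = np.random.default_rng(7)
print(horizontal_precision(10 + rng.normal(0, 2, 500),
                           -5 + rng.normal(0, 2, 500)))
```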

12.- Reproducibility

The ability of an entire experiment or study to be reproduced, either by the researcher or by someone else working independently.

13.- Scalability

The ability of the software to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth.

14.- Testability

The degree to which a software artifact supports testing in a given test context.

15.- Openness

The degree to which something is accessible to be viewed, modified, distributed and used.

16.- Usability

The degree to which software can be used by specified users to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a given context of use.


References

  1. J. B. S. Teasley, Summary of the initial GPS Test Standards Document: ION STD-101, in Proc. of 8th International Technical Meeting of the Satellite Division of The Institute of Navigation, Palm Springs, CA, Sep. 1995, pp. 1645–1653. 

  2. Institute of Navigation, ION STD 101 recommended test procedures for GPS receivers. Revision C, Manassas, VA, 1997. 

  3. G. T. Doran, There’s a S.M.A.R.T. way to write management’s goals and objectives, Management Review, vol. 70, no. 11, pp. 35–36, 1981.

  4. C. Fernández-Prades, J. Arribas, and P. Closas, Assessment of software-defined GNSS receivers, in Proc. of NAVITEC 2016, ESA-ESTEC, Noordwijk, The Netherlands, 14–16 Dec. 2016, pp. 1–9.
