16 Design Forces for software-defined GNSS receivers
A GNSS receiver is a complex device whose performance is affected by a wide range of internal and external factors. To the best of the authors' knowledge, the first formal effort to define testing procedures for GPS receivers is found in the paper by Teasley [1], a work that anticipated the key concepts of the Standard 101 published by the Institute of Navigation in 1997 [2]. Such procedures have been widely accepted by the GNSS industry and, two decades later, world-class testing firms still reference them in their white papers. In summary, the proposed testing procedures measure a receiver's sensitivity in acquisition and tracking, time-to-first-fix and reacquisition times under diverse conditions, static and dynamic location accuracy, and robustness to multipath and radio frequency interference.
The very nature of software-defined radio technology requires a broader approach. A GNSS receiver in which the baseband processing chain is implemented in software and executed by a general-purpose processor has other design forces that are equally important, and key to achieving real impact and technical, market, and social success, but that are usually not captured by traditional GNSS testing procedures and quality metrics.
Key Performance Indicators (KPIs) are goals or targets that measure how well a given activity is doing on achieving its overall operational objectives or critical success factors. KPIs must be objectively defined in order to provide a quantifiable and measurable indication of the product or service development progress towards achieving its goals.
S.M.A.R.T. is an acronym first mentioned in 1981 [3], and it is usually invoked when identifying and defining KPIs, as a reminder of their desirable features:
- Specific: Is this KPI too broad, or is it clearly defined and identified?
- Measurable: Can the measure be easily quantified?
- Attainable: Is it realistic for us to obtain this measure within the given project framework? Can we take the appropriate actions to implement this KPI and see changes?
- Realistic: Is our measure practical and pragmatic?
- Timely: How often are we going to be able to look at data for this measure?
Hence, KPIs are not universal but specific to the particular project, product, or service to which they are applied. This page suggests a wide list of indicators, derived from the Design Forces defined below, to be considered when assessing the quality of a software-defined GNSS receiver. Their degree of S.M.A.R.T.-ness may vary in each particular context.
The design of a software-defined GNSS receiver needs to resolve some design forces that may appear antithetical (e.g., portability vs. efficiency, openness vs. marketability), and a "sweet spot" must be identified according to the targeted user and application. Hereafter, we identify 16 dimensions in which the performance and features of a software-defined GNSS receiver can be evaluated [4]. Click on their names to see a discussion of each concept and some possible metrics, indicators, and checkpoints:
1.- Accuracy: How close a Position-Velocity-Time (PVT) solution is to the true position.
2.- Availability: The degree to which a system, subsystem, or equipment is in a specified operable and committable state at the (random) start of a mission.
3.- Efficiency: How fast the software receiver can process the incoming signal and, in particular, how many channels it can sustain in parallel.
4.- Flexibility: The ability of a system to respond to potential internal or external changes affecting its value delivery, in a timely and cost-effective manner.
5.- Interoperability: The ability of making systems work together.
6.- Maintainability: The ease with which a product can be maintained in order to isolate and correct defects and cope with a changing environment.
7.- Marketability: A measure of the ability of a product to be bought and sold.
8.- Portability: The usability of the same software in different environments.
9.- Popularity: A complex social phenomenon with no agreed-upon definition. It can be defined in terms of liking, attraction, dominance, or just being trendy.
10.- Reliability: The ability of a system or component to function under stated conditions for a specified period of time.
11.- Repeatability: How close a position solution is to the mean of all the obtained solutions. It is related to the spread of a measure, also referred to as precision.
12.- Reproducibility: The ability of an entire experiment or study to be reproduced, either by the researcher or by someone else working independently.
13.- Scalability: The ability of the software to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth.
14.- Testability: The degree to which a software artifact supports testing in a given test context.
15.- Openness: The degree to which something is accessible to be viewed, modified, distributed, and used.
16.- Usability: The degree to which a software can be used by specified consumers to achieve quantified objectives with efficiency and satisfaction in a given context of use.
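The distinction between Accuracy (closeness to the true position) and Repeatability (spread around the mean, i.e., precision) can be illustrated with a minimal sketch. The following example, using a hypothetical set of East/North position fixes in meters relative to a surveyed ground-truth point at the origin, computes a 2D RMS error against ground truth (accuracy) and a 2D DRMS around the sample mean (repeatability):

```python
import math

# Hypothetical East/North position fixes (meters) relative to a
# surveyed ground-truth point located at the origin (0, 0).
fixes = [(1.2, -0.8), (0.9, -1.1), (1.4, -0.6), (1.1, -0.9), (1.0, -1.0)]

# Accuracy: 2D RMS error with respect to the true position.
rmse = math.sqrt(sum(e**2 + n**2 for e, n in fixes) / len(fixes))

# Repeatability (precision): 2D DRMS around the fixes' own mean,
# regardless of where the true position actually is.
mean_e = sum(e for e, _ in fixes) / len(fixes)
mean_n = sum(n for _, n in fixes) / len(fixes)
drms = math.sqrt(sum((e - mean_e)**2 + (n - mean_n)**2 for e, n in fixes)
                 / len(fixes))

print(f"2D RMSE (accuracy):      {rmse:.2f} m")
print(f"2D DRMS (repeatability): {drms:.2f} m")
```

Note that this cluster of fixes is tightly grouped but biased away from the truth, so it scores much better on repeatability than on accuracy, which is precisely why the two are listed as separate design forces.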
References

1. J. B. S. Teasley, "Summary of the initial GPS Test Standards Document: ION STD-101," in Proc. of the 8th International Technical Meeting of the Satellite Division of The Institute of Navigation, Palm Springs, CA, Sep. 1995, pp. 1645–1653.
2. Institute of Navigation, ION STD 101: Recommended Test Procedures for GPS Receivers, Revision C, Manassas, VA, 1997.
3. G. Doran, "There's a S.M.A.R.T. way to write management's goals and objectives," Management Review, vol. 70, no. 11, pp. 35–36, 1981.
4. C. Fernández-Prades, J. Arribas, and P. Closas, "Assessment of software-defined GNSS receivers," in Proc. of NAVITEC 2016, ESA-ESTEC, Noordwijk, The Netherlands, Dec. 14–16, 2016, pp. 1–9.