Research and Advances
Computing Applications Review articles

The Next Generation of GPS Navigation Systems

A live-view GPS navigation system ensures that a user's cognitive map is consistent with the navigation map.

Navigational information provided by a conventional global positioning system (GPS) navigation system is not adequately intuitive to a user. Moreover, the environment often degrades the accuracy of the hardware sensors in a GPS navigation system, causing users to become disoriented while navigating. In short, the navigation map provided by a conventional GPS system is not always consistent with a user's cognitive map.

Key Insights

  • While current GPS navigation uses a 2D digital map, a simulated bird's-eye view, and synthetic 3D pictures to guide the user, this article presents an innovative approach that uses an augmented reality interface to provide information tightly matched with the user's cognitive map.
  • Live-view navigation is the trend and will become the standard interface on the windshield of every car in the next century.
  • A copious amount of user behavior is hidden within GPS raw data and is neglected by GPS navigators on the market. This article presents an algorithm that exploits this implicit data to provide a personalized navigational experience.

The navigation map1 refers to navigational guidance with its related geographical information derived from conventional paper charts or electronic maps. Developed by Tolman in 1948,13 the cognitive map refers to the ability of the human subconscious to accumulate and construct spatial and temporal information. This ability supports mental processing through a series of psychological transformations. Consequently, an individual can acquire, code, store, recall, and decode information regarding the relative locations and attributes of phenomena in their daily or metaphorical spatial environment. Namely, a cognitive map allows humans to sense and identify the environment where they reside. This map is derived from the accumulation of their familiarity and perceptions while wandering around their living spaces. Hence, the performance of a navigation system largely depends on the consistency between the navigation map and the cognitive map.

Figure 1 sketches the continuum of GPS system development from the standpoint of the navigational user interface. The figure also illustrates the history of navigation systems regarding the user's mental cognition, past and present. This continuum has two implications. First, the view of a discrete navigation continuum exhibits the evolution of GPS navigation, along with technological progress, from the simple 2D electronic map, to the simulated 3D electronic map with a synthetic view, to live-view navigation. Second, the perspective of a continuous navigation continuum reveals the efforts to provide sufficient navigational information, from the earlier simple 2D electronic map to a simulated 3D electronic map accompanied by an emulated landscape image. Although they attempt to provide users with more intuitive information through multiple windows of a simulated scene, the latest navigation systems fail to solve the mental rotation problem and even further increase users' mental burden.

Consequently, by extending the results of previous cognitive map studies, this article presents the next generation of GPS navigation systems, that is, live-view GPS navigation (LVN), which adopts context-aware technology to eliminate hardware sensor signal errors and resolve the inconsistency between cognitive and navigation maps. Namely, after retrieving live video via a camera sensor, the proposed system integrates navigation information and live video on the same screen to merge the user's navigation map and cognitive map. On the continuum depicted in Figure 1, the proposed LVN system is the closest to the user's cognitive awareness.

The Rationality and Novelty of the Proposed System

Previous studies summarize the factors impacting navigational performance as personal capability and environmental characteristics.4 Personal capability is related to the user's cognitive map, which is dominated by age, gender, and environmental familiarity. Meanwhile, environmental characteristics refer to the street map of the surrounding environment. Hart7 indicated that a cognitive map is a user's mental space in geographical recognition and has an egocentric orientation. If the cognitive map can be understood from a user's behavior, the personal capability of that user can be enhanced by using augmented reality (AR) to tag environmental characteristics on the navigation display.

Context-aware technology extracts users' context data to perceive their intentions and provide a proper service. As a user's surrounding information is perceived by hardware sensors, the context data represents all information related to the user's status, including identification, spatial information, time, and activities. The greatest challenge of designing such a context-aware system is the complexity of collecting, extracting, and interpreting contextual data. Context-aware technology attempts to make the system "intelligent" and reduce the user's burden with respect to mental space.

A LVN system adopts context-aware technology to sense a user’s mental space and then utilizes the AR approach to provide intuitive navigational information. The features of the LVN system can be summarized as follows.

Personalized GPS positioning method: A LVN system exploits habitual behavior to further calibrate GPS data. According to the context-aware scheme, the system first extracts feature data from received GPS data. The user's motion status is then perceived from the feature data based on Newton's law of inertia. The drifted GPS data can then be amended based on the perceived motion status.

Egocentric navigation approach: With the assistance of the perceived motion status from received GPS data, the system further adopts the context-aware scheme to predict the user’s mental status when they are not following a planned route. In other words, when users deviate from a planned route, the system can determine if they are lost or simply chose not to follow the planned route.

Intuitive user interface: The interface design of the conventional GPS navigation system is still trapped within the concept of a paper map. Although commercially available GPS navigation systems adopt a bird’s-eye view to render a map and provide a simulated interchange or important junction snapshot image,5 they cannot match the user’s intuition on the directional guidance. Conversely, the LVN system uses the AR scheme to integrate a directional arrow with live video in order to provide intuitive navigational information. In this manner, a user’s cognitive map can tightly couple with navigational information.

In contrast with current AR navigation research, a LVN system attempts to detect a user’s cognitive map from the received GPS data and provide proper navigation information. Namely, a LVN system adopts context-aware and AR approaches to provide a navigational approach to ensure users’ cognitive maps are consistent with their navigation map. Therefore, the LVN system is a means of providing a personalized and intuitive GPS navigational experience.

The Architecture

Figure 2 illustrates the architecture of a LVN system based on the features discussed here. Its core engine comprises a position-aware service to amend GPS drift data according to the habitual behavior of users, an orientation-aware service to eliminate the digital compass's accumulated errors through simple image processing, and a waypoint-aware service to detect the users' mental status from their perceived motion status. Finally, results of the waypoint-aware service are input into an AR render module to produce an appropriate arrow sign on the live video from a video camera. Here, we discuss each service in detail.

Position-aware service focuses mainly on computing a user's actual position based on the received GPS data. The process begins with acquiring the user's moving status from the received GPS data. Understanding the user's movements allows this service to further detect whether an environmental error, a multi-path error, or even a receiver error influences the position data received from a GPS sensor. While most works14 on calibrating received GPS data focus on integrating auxiliary hardware, they neglect the information available around users and their practical requirements. Context-aware technology stresses how to comprehend a user's actual requirements and provide appropriate services based on data retrieved from sensors. The position-aware service infers a user's reasonable moving status by promptly interpreting GPS data. Moreover, the user's accurate position can be obtained by amending the GPS position error based on the perceived moving states. Hence, this position-aware service is also referred to as the perceptive GPS (PGPS) method.8

To obtain the steady habitual behavior of users and subsequently gain accurate position information, PGPS8 divides the context-aware approach into four consecutive phases: feature extraction, state classification, perception, and amendment (Figure 3). The feature extraction phase extracts user movement feature data from the received raw signals of a GPS receiver; for example, speed over ground (SOG), position, and course information. This feature data includes the user's displacement, velocity difference, and course difference between two consecutive GPS raw data. Based on the momentum change, the state classification phase then uses the feature data to classify the user's behavior into one of the following seven states: stationary, linear cruise, linear acceleration, linear deceleration, veering cruise, veering acceleration, and veering deceleration. The classified state is called the current state (CS). During the offline stage, these classification results are further analyzed by applying Newton's laws of motion to compute the steady state transition probability matrix (TPM) of the user. In addition, the TPM will not be the same if the user's habitual behavior changes with the environment; in other words, each momentum range should have its corresponding TPM. The perception phase then utilizes this TPM to forecast the most likely current status of the user, called the perceived state (PS). Finally, the amendment phase fine-tunes the GPS position and course data based on the difference between the PS and CS. Positive results from the PGPS experiments confirm that, under environmental interference, the position-aware service can successfully perceive a user's habitual behavior from the TPM to effectively amend the drift error and derive accurate position data.


A LVN system adopts context-aware technology to sense a user’s mental space and then utilizes the AR approach to provide intuitive navigational information.


Therefore, the algorithm of position-aware service is implemented as follows:

  1. Compute the feature data from online received GPS data;
  2. Classify the current state, CS, from the feature data;
  3. Predict the current state, PS, by P(S_i | TPM, S_{i−2}, S_{i−1}) = arg max_i { p_{i−2,i−1,i} } with a stable transition matrix TPM(p_{i,j,k}) and two preceding reasonable states S_{i−2} and S_{i−1};
  4. Compare PS with CS. If they are the same, do nothing. Otherwise, increase the posture counter by one to detect a likely state change of the GPS receiver;
  5. If the posture counter is less than a predefined threshold value and the likelihood of PS is nonzero, GPS drift error is detected. That is, CS is incorrect, PS is assigned to CS, and the online received GPS data are modified accordingly; and
  6. Otherwise, recognize that the GPS receiver has already changed its state: validate the CS value and reset the posture counter.
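The six steps above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the classification thresholds, the seven-state encoding, and the TPM layout are all assumptions.

```python
# Sketch of the PGPS position-aware loop. Thresholds and state codes
# are illustrative assumptions, not values from the article.

import numpy as np

STATES = ["stationary", "linear_cruise", "linear_accel", "linear_decel",
          "veer_cruise", "veer_accel", "veer_decel"]

def classify_state(speed, dv, dcourse, v_eps=0.5, a_eps=0.3, c_eps=10.0):
    """Map feature data (speed, velocity difference, course difference)
    to one of the seven motion states (CS)."""
    if speed < v_eps:
        return 0                                  # stationary
    veering = abs(dcourse) > c_eps                # course change in degrees
    if dv > a_eps:
        return 5 if veering else 2                # acceleration
    if dv < -a_eps:
        return 6 if veering else 3                # deceleration
    return 4 if veering else 1                    # cruise

def perceive_state(tpm, s2, s1):
    """Perception phase: PS = argmax_i TPM[s2, s1, i]."""
    return int(np.argmax(tpm[s2, s1]))

def pgps_step(tpm, s2, s1, cs, counter, threshold=3):
    """Amendment phase: compare PS with CS and decide whether the
    received GPS datum reflects drift error or a real state change."""
    ps = perceive_state(tpm, s2, s1)
    if ps == cs:
        return cs, 0                              # states agree, accept CS
    counter += 1
    if counter < threshold and tpm[s2, s1, ps] > 0:
        return ps, counter                        # drift detected: trust PS
    return cs, 0                                  # real state change: keep CS
```

Here the TPM is indexed by the two preceding states and a candidate current state, so the perception phase reduces to a single argmax over the third axis.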

Orientation-aware service focuses on inferring the orientation of a mobile device embedded with various orientation sensors, including a digital compass, a GPS receiver, and a CMOS sensor. A digital compass utilizes the Hall effect, via the Lorentz force, to estimate the change in magnetic force and subsequently infer orientation data. However, iron materials or any magnetic fields near the digital compass interfere with its local magnetic field, subsequently incurring errors in the estimated orientation. Based on the psychology of human motion, humans concentrate on their heading direction when their moving speed is above a certain threshold. Therefore, under this circumstance, the course over ground (COG) data from the GPS receiver can be cross referenced with the orientation data of the digital compass in order to detect magnetic interference and subsequently calculate precise orientation information.

However, GPS data has significant drift errors when the GPS receiver is moving at minimal or zero speed, so the COG data becomes unreliable for cross checking with the digital compass data. Furthermore, humans tend to observe their surroundings when they are standing still or moving below a certain speed. A CMOS sensor, which is commonly integrated into current mobile devices, can thus be used as an alternative source to cross reference with the data from the digital compass when users are moving at minimal speed or standing still. In other words, the optical flow of consecutive images from a CMOS sensor can be cross referenced with the digital compass to derive accurate orientation information. Figure 4 illustrates the flow of the orientation-aware service.


The navigational interface on the smartphone must be carefully designed to provide an intuitive human-centric navigation experience.


Conventional image-processing approaches require intensive computing power, which is inapplicable to the limited resources of mobile devices. Wagner3 proposed a natural pose algorithm, an improvement of SIFT10 and Ferns,12 for use on smartphones. Although it possesses excellent image feature point tracking capability, the natural pose algorithm can only estimate the posture of the smartphone from pretrained images. The more recent PTAM9 approach relaxes this constraint by enabling the smartphone to calculate its relative posture in an unknown environment. PTAM9 also demonstrated that tracking the known feature points of previous consecutive images on the current image can be expedited via optical flow technology; namely, no feature detection over the entire image and no feature mapping between two consecutive images is required. Moreover, the rotation angle between two consecutive images can then be estimated by optical flow analysis. Since the orientation-aware service only uses the relative rotation angle from the optical flow to cross reference with data from the digital compass, no feature detection on the image is necessary. The simple optical flow equation

ω = (V0 sin β) / d

from Nakayama11 can be used to compute the relative rotation angle of the smartphone. Within this equation, ω denotes the angular velocity of the optical flow, β the relative rotation angle from the heading, V0 the velocity of the user holding the smartphone, and d the relative motion distance. Thus, regardless of whether users are moving at low speed or standing still, the orientation-aware service can identify the accumulated errors of a digital compass and infer their precise heading or facing information. Given the current reasonable state, speed over ground, and COG of a GPS receiver from the position-aware service, the algorithm of the orientation-aware service is as follows:

  1. When the GPS receiver moves above a certain speed, the course over ground (COG) from the position-aware service can be used to cross reference with the orientation data of the digital compass in order to calculate the precise orientation information.
  2. Otherwise, when the GPS receiver is standing still or moving at minimal speed, the optical flow equation

ω = (V0 sin β) / d

is used to estimate the relative rotation angle from consecutive images to cross check with the digital compass data in order to derive the valid orientation information.
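As a sketch, this two-branch cross-check might look as follows. The speed threshold, tolerance, and function names are illustrative assumptions rather than the article's implementation.

```python
# Hedged sketch of the orientation-aware cross-check: above a speed
# threshold, trust the GPS course over ground (COG); below it, integrate
# the optical-flow rotation onto the previous heading. Both serve as a
# reference to detect magnetic interference in the compass reading.

def angle_diff(a, b):
    """Smallest signed difference between two headings in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def fuse_orientation(compass_deg, cog_deg, flow_rotation_deg,
                     prev_heading_deg, speed, speed_threshold=1.5,
                     tolerance=15.0):
    """Return a corrected heading in degrees."""
    if speed >= speed_threshold:
        reference = cog_deg                        # moving: COG is reliable
    else:
        # standing still / slow: propagate heading by the relative
        # rotation estimated from two consecutive images
        reference = (prev_heading_deg + flow_rotation_deg) % 360.0
    if abs(angle_diff(compass_deg, reference)) > tolerance:
        return reference                           # magnetic interference
    return compass_deg                             # compass is consistent
```

The design choice is that the compass reading is kept whenever it agrees with the reference, since the compass has finer resolution; the reference only overrides it when the divergence suggests interference.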

Waypoint-Aware Service. Whenever the LVN system detects that the user is not on the planned route, the waypoint-aware service is activated to analyze the user's moving intentions. When not following the planned route, users might be lost, or they might be quite familiar with the region and have their own preferred path. Unaware of user intentions, some conventional GPS navigation systems provide U-turn guidance as the prime solution when users are not following a planned route. A human-centric GPS navigation system should be able to tell whether the user is actually lost and reschedule the navigation path as soon as possible based on the user's intentions. The waypoint-aware service is such a technique: it uses the CS and PS from the position-aware service to predict whether the user is lost or has deviated from the route on purpose.

To formalize the user intention prediction process, the waypoint-aware service first defines the situation of users following a planned route as the Following state. When users begin to stray from the planned route, Crampton2 defined this scenario as users entering the Unknown lost state, as shown in Figure 5(a). Unaware of the users' intentions, this model cannot determine whether users are deliberately not following the route or are lost. If users do not follow the path for a period of time, say [t0, t2] as illustrated in Figure 5(a), Crampton then assumes users are transitioning into the Known lost state. This approach is a de facto standard adopted by GPS navigation systems today. A U-turn sign is constantly prompted while users are in the Unknown lost state. When this period expires and users still have not returned to the planned route, the system then replans a new route for them. Given the lack of a clear rule on the length of such a period, this paradigm occasionally makes users anxious when they are totally unfamiliar with the route. To resolve this problem, the waypoint-aware service further divides Crampton's Known lost state into the Known lost and Deviate-but-confidence states, as depicted in Figure 5(b).

The determination of whether a user is in the Known lost state or the Deviate-but-confidence state forms the core of the waypoint-aware algorithm for providing a human-centric navigating service. According to the psychology of human motion, users who become lost will subconsciously change their moving state by abruptly decreasing their speed to have adequate time to observe their surroundings. Conversely, when confident in their movements, users will continue in their current state with the same movement and disregard warning messages. Hence, this psychological feature can be used to forecast users' motion intentions. Once users are not following the planned route, the waypoint-aware algorithm is activated to detect whether users make any sudden state changes. If the PS from the position-aware service remains the same as the CS, users are deemed to be in the Deviate-but-confidence state. Otherwise, they are in the Known lost state. By detecting state changes, we can shorten the period of the Unknown lost state, say [t0, t1] in Figure 5(b), and reschedule a new route as soon as possible if required. Hence, the waypoint-aware algorithm can be outlined as follows:

  1. Use the amended position and the waypoint list to determine whether the user is on the planned route;
  2. If not, query the TPM for the transition probabilities of PS and CS, respectively, and compute their difference. Otherwise, do nothing;
  3. If the difference is larger than a predefined threshold, BETA, increase the unknown_lost_count by one. Otherwise, reset the unknown_lost_count; and
  4. If the unknown_lost_count is greater than a threshold value, DELTA, assume the user is definitely lost and replan a new route from the current position.
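A minimal sketch of these four steps, with placeholder values for BETA and DELTA (the text derives principled choices for both; the values here are only illustrative):

```python
# Sketch of one waypoint-aware decision step. p_ps and p_cs are the TPM
# transition probabilities of the perceived state (PS) and the
# classified current state (CS).

def waypoint_aware_step(on_route, p_ps, p_cs, lost_count,
                        beta=0.25, delta=2):
    """Return the updated lost counter and whether to replan the route."""
    if on_route:
        return 0, False                       # Following state: nothing to do
    if abs(p_ps - p_cs) > beta:               # abrupt state change: user slows
        lost_count += 1                       # down to observe surroundings
    else:
        lost_count = 0                        # Deviate-but-confidence state
    return lost_count, lost_count > delta    # Known lost: replan the route
```

Calling this once per received GPS datum (1Hz) means a replan is triggered only after the abrupt-change evidence persists for more than DELTA consecutive seconds.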

From this algorithm, it is clear that the parameters BETA and DELTA are the key factors in shortening the period of the Unknown lost state. Since the value of BETA is used to determine whether the transition probability of PS differs significantly from that of CS, thereby detecting an abrupt change between two consecutive states, it is highly related to the user's habitual behavior. Because the LVN system uses the TPM to record user habitual behavior, the value of BETA can be computed from the TPM accordingly. Let TPM = |p_{ijk}|7×7×7 and β_{xyz} = |p_{ijk} − p_{lmn}| for all 0 ≤ i, j, k, l, m, n ≤ 6 with i ≠ l, j ≠ m, and k ≠ n; we can then compute A = min{β_{xyz}} and B = max{β_{xyz}}. It is obvious that A ≤ BETA ≤ B. Hence, BETA is heuristically defined as (A+B)/2 to capture the average case of an abrupt change between two consecutive states. As for the parameter DELTA, it should be smaller than the period of the Unknown lost state (four seconds in most cases) set by a conventional GPS navigation system. Hence, DELTA can heuristically be chosen as 2, given that the GPS receiver receives messages at 1Hz.
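The BETA heuristic can be computed directly from the TPM. The following brute-force sketch enumerates the pairwise differences exactly as defined above; it is an illustration, not an optimized implementation.

```python
# BETA from the TPM: pairwise differences |p_ijk - p_lmn| over all
# state triples with i != l, j != m, k != n, then the midpoint of
# their minimum and maximum.

import numpy as np
from itertools import product

def compute_beta(tpm):
    """tpm is a 7x7x7 transition probability matrix; returns (A + B) / 2
    where A and B are the min and max pairwise differences."""
    diffs = [abs(tpm[i, j, k] - tpm[l, m, n])
             for (i, j, k), (l, m, n)
             in product(product(range(7), repeat=3), repeat=2)
             if i != l and j != m and k != n]
    return (min(diffs) + max(diffs)) / 2.0
```

Since the TPM is learned offline, this enumeration (roughly 7^6 pairs) runs once per user rather than per GPS datum.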

Augmented Navigation Service. Finally, the LVN system adopts an AR service to precisely tag the directional sign on the captured live video and thereby provide intuitive navigational services. During navigation, the system first registers the GPS receiver to the physical environment using the positional data from the position-aware service and the orientation information from the orientation-aware service. This registration process establishes the spatial relationship between users and their surrounding environment. According to the derived heading waypoint and the embrace angle θ from the waypoint-aware service, the arrow sign computation algorithm calculates an appropriate directional arrow sign to guide the user. Via this human-centric interactive directional sign, the system provides an intuitive user interface that gives users adequate time to match their cognitive maps with navigation maps.

Furthermore, the arrow sign computation algorithm uses a G-sensor to detect the posture of the user holding the smartphone and draw the skyline as well as the directional sign relative to the screen accordingly. The algorithm is implemented as follows:

  1. Calculate the distance from the current position to user’s heading waypoint;
  2. If this distance is beyond a user’s visible range and within a smartphone’s FOV, draw a straight arrow on the screen that points to the heading waypoint;
  3. If this distance is within a user’s visible range as well as a smartphone’s FOV, draw a curvy directional sign that matches the user’s cognitive map based on the skyline relative to the screen, the heading waypoint, and the embrace angle θ. Notably, angle θ is computed by the waypoint-aware service; and
  4. Otherwise, draw on the screen a U-turn sign or an arrow sign to the extreme left or right, depending on the embrace angle θ.
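The four cases can be sketched as a simple selection function. The visible range, field of view, and U-turn cutoff below are illustrative assumptions, not values from the article.

```python
# Sketch of the arrow sign selection. distance is the metres to the
# heading waypoint; embrace_deg is the signed embrace angle theta
# between the user's heading and the waypoint.

def choose_arrow(distance, embrace_deg, visible_range=50.0, fov_deg=60.0):
    """Pick which directional sign to draw on the live video."""
    in_fov = abs(embrace_deg) <= fov_deg / 2.0
    if distance > visible_range and in_fov:
        return "straight"                     # far away: point straight ahead
    if distance <= visible_range and in_fov:
        return "curve"                        # nearby: curvy sign on skyline
    if abs(embrace_deg) > 150.0:
        return "u_turn"                       # waypoint roughly behind user
    return "right" if embrace_deg > 0 else "left"   # extreme side arrow
```

In the full system the "curve" case would additionally use the skyline from the G-sensor and the embrace angle to shape the drawn sign; this sketch only covers the case selection.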

In sum, with the assistance of angle θ from the waypoint-aware service, the directional arrow becomes a label that dynamically points to the heading waypoint in order to provide an intuitive navigational experience.

Application System Prototyping

The effectiveness of the LVN system is demonstrated by the prototype system shown in Figure 6. This system comprises an Android-powered smartphone, a waypoint server, and the publicly available Google Map server. The waypoint server acts as a gateway between the smartphone and the Google Map server. When the smartphone transmits a user's current position and destination to the waypoint server through a 3G network, the waypoint server forwards this request to the Google Map server for a navigational path. Upon receiving the planned navigation path from the Google Map server, the waypoint server packs it into a waypoint list and forwards this list to the smartphone. The smartphone requests a new navigation path from the waypoint server at two points: when navigation is initiated, and when the waypoint-aware service detects that users are lost during navigation. Demo videos of the resulting system can be accessed at http://www.youtube.com/watch?v=UtDNA7qH2FQ (Pedestrian mode) and http://www.youtube.com/watch?v=NIQBaTEHLGE (Driving mode).

Initialization interface. According to Figure 7, the LVN prototype system provides two initialization interfaces. Users can either manually input the destination name through an input textbox (Figure 7(a)) or click a POI icon on the map (Figure 7(b)). Once a smartphone receives the waypoint list from the waypoint server, the corresponding navigation path is then sketched on the smartphone with the map mode. The LVN system is then initiated after the user clicks the navigation icon on a screen.

The navigational interface on the smartphone must be carefully designed to provide an intuitive human-centric navigation experience. Three types of navigation information are displayed on the interface (Figure 8): a trail map, a direction icon, and a course arrow. The trail map in the upper right is a miniature Google Map that records the moving path of the users; it makes users aware of the relationship between their current position and the neighboring roadways. The direction icon is embedded in the upper-left corner so that users understand their orientation. The course arrow in the bottom half of the screen is the interactive directional sign that points to the next waypoint of the planned route. Notably, the course arrow is the implementation of the augmented navigation interface, that is, the most important mechanism for correlating the cognitive map with the navigation map.

Experiments and User Study

After the prototype system was constructed, concerns over robustness, performance, and user experience arose. Experiments were conducted with the HTC Hero smartphone (http://www.htc.com/www/product/hero/specification.html) to address these concerns. The HTC Hero is powered by Android 2.1 and equipped with various sensors, including a GPS receiver, a digital compass, and a G-sensor. Since GPS positioning accuracy is essential to the LVN system, the experiments began with testing the robustness of the position-aware service, followed by an evaluation of the performance of the core engine (Figure 2). Finally, user experience was surveyed to determine whether users are more comfortable with the LVN system than with a conventional GPS navigation system.


The LVN system is a mobile augmented reality system that attempts to improve the efficiency of GPS navigation.


Robustness of the position-aware service. Since the position-aware service is based on the PGPS algorithm, the robustness of this service is equivalent to the accuracy of the PGPS algorithm. Based on the cumulative distribution function (CDF), an experiment was conducted to compare the cumulative GPS drift errors with and without the assistance of PGPS; the result is shown in Figure 9. The test site is a street in a university town with tall buildings on both sides. This scenario was deliberately selected to test the impact of environmental interference on the robustness of PGPS. Figure 9 indicates that, due to environmental interference, GPS drift error occurs at the 54th second and accumulates as time proceeds when the PGPS algorithm is not implemented. Conversely, the CDF curve of the GPS drift error is much smoother when the PGPS algorithm is implemented. During live-view navigation, the jittering of GPS data may cause the directional arrow to sway, possibly disturbing user judgment. Hence, the experimental results indicate that PGPS can provide stable GPS data for the LVN system.

Performance evaluation of LVN system. According to Figure 2, the core engine of the LVN system comprises the position-aware, orientation-aware, and waypoint-aware services. In other words, the performance of these three services dominates the LVN system. This experiment thus measures the core engine execution time, starting from the position-aware and orientation-aware services until the augmented navigation service draws the directional arrow sign. The experiment is conducted by asking the user to walk along a road while the core engine execution time for each GPS datum is logged. The process lasts for over 600 sets of consecutive GPS data. Figure 10 shows the resulting histogram, with each set of GPS data on the horizontal axis and the core execution time on the vertical axis. According to Figure 10, the core engine execution time is always below 140ms. Since the GPS receiving rate is one complete set of NMEA sentences per second, the experimental results indicate that the LVN system executes faster than the GPS receiving rate.

User study. The LVN system is a mobile augmented reality (MAR) system that attempts to improve the efficiency of GPS navigation. From the user's perspective, the LVN system provides a new type of user interface for navigation. Hence, users' confidence in the information from the LVN system and intuitive interaction with the LVN system become two indicators for validating its impact. Accordingly, user studies were conducted with 40 students aged 20 to 23. The students were divided into two groups, each with 10 males and 10 females. Group 1 was instructed to use the conventional GPS navigation system first and then the LVN system; Group 2 used the LVN system first and then the conventional GPS navigation system. Each student was then instructed to score the following questions from 1 to 5, with 5 representing the highest confidence.

Q1. When using the conventional GPS navigation system, how much confidence do you have in the information the system provides?

Q2. When using the LVN system, how much confidence do you have in the information the system provides?

Q3. Compared with the conventional GPS system, do you prefer the navigation pattern the LVN system provides? Score 5 implies that the LVN system is your preferred system and 1 suggests that you like conventional GPS navigation better.

Q4. As for the LVN system, are you comfortable with the design of the augmented navigation arrow? A score of 5 implies that you like it very much.

The survey results are tabulated in the accompanying table, illustrating the mean, standard deviation (SD), minimum, and maximum values for Groups 1 and 2, as well as the p-value of each question. For Q1, Group 1 showed more confidence in the conventional GPS navigator than Group 2. The reason is that Group 2 used the LVN system first and became impatient and uncomfortable when switching to the conventional GPS system. Following Goodman,6 since the p-value of Q1 is smaller than 0.05, the survey result for Q1 is convincing. On the other hand, although both groups had higher mean values on Q2 than on Q1, the p-value of Q2 shows the result deserves further investigation.

Comparing the SDs of Q1 and Q2 for both groups, Group 1 had a higher SD on Q2, while Group 2 showed the opposite. Again, since Group 2 used the LVN system first, it was more appreciative of the intuitive navigation of the LVN system than Group 1.

The variation of the mean values on Q1 and Q2 for Group 2 also supports this observation. Furthermore, although Group 1 disagreed more on Q2, the p-value of Q3 shows that both groups favored the LVN system. Interestingly, the mean value and SD of Group 2 on Q2 and Q3 reveal that students favor the LVN system more if they use it first. Finally, as shown by the mean value and p-value of Q4, both groups valued the new AR-based navigational interface.

Back to Top

Conclusion

This article presented a novel approach to designing a new-generation GPS navigation system, called the live-view navigation (LVN) system (http://www.omniguider.com). The LVN system matches the human cognitive map via context-aware and augmented reality (AR) technologies. It employs several hardware sensors in its position-aware and orientation-aware services to derive precise position and orientation information. Additionally, when users stray from the planned route, the system tracks their moving intentions through the waypoint-aware service. Moreover, the AR-based augmented navigation interface provides more intuitive and convenient guidance to the user. During prototype verification, the following topics were raised for future research:

  • Training time: Because the LVN system uses context-aware technology to learn users' behaviors, a priority concern is how to perceive the complex interactions between users and their surrounding environments. The context-aware method requires a learning model before user behavior can be perceived, and different models require different learning processes, which affects training time. Although this work presented one approach to observing user behavior, whether better solutions exist remains unclear. Behavior learning significantly affects the accuracy of sensor inference: more observation samples naturally yield better learning outcomes. Therefore, the training time must be shortened to make the proposed system more practical.
  • Cognitive accuracy: How can an augmented navigation interface accurately overlay the directional sign at a multi-crossroad intersection? The LVN system uses the embrace angle between the heading waypoint and the next waypoint to compute the directional arrow pointing to the next route. However, live tests showed that this approach is not sufficiently accurate when the candidate routes at a multi-crossroad intersection lie close to each other. Hence, further investigation is warranted to provide more accurate and intuitive navigation information.
  • Adaptive arrow drawing: Because the LVN system captures live video during navigation, the color and brightness of the captured images change dynamically with the surrounding environment. The AR directional arrow occasionally has the same color as the image, making it indistinguishable from the background; the situation worsens in bright daylight. Hence, how to render an arrow that dynamically adapts to background color and brightness is also worth studying.
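The last two issues can be made concrete with a small sketch. Assuming waypoints are (latitude, longitude) pairs and background pixels are RGB triples (both conventions are assumptions of this illustration, not details of the prototype), the embrace angle reduces to a difference of compass bearings, and arrow contrast reduces to thresholding the background's average luminance:

```python
import math

def bearing(p1, p2):
    """Initial compass bearing in degrees (0 = north) from p1 to p2,
    where each point is a (latitude, longitude) pair in degrees."""
    phi1, phi2 = math.radians(p1[0]), math.radians(p2[0])
    dlon = math.radians(p2[1] - p1[1])
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def turn_angle(user, heading_wp, next_wp):
    """Signed angle between the current leg and the next leg, in [-180, 180).
    This is the angle the AR arrow indicates; exits that lie close together
    at a multi-crossroad intersection yield nearly identical values."""
    return (bearing(heading_wp, next_wp) - bearing(user, heading_wp) + 180) % 360 - 180

def arrow_color(background_pixels):
    """Choose a dark arrow on bright backgrounds and a light one otherwise,
    using ITU-R BT.709 luminance weights on (R, G, B) triples."""
    avg = sum(0.2126 * r + 0.7152 * g + 0.0722 * b
              for r, g, b in background_pixels) / len(background_pixels)
    return (0, 0, 128) if avg > 128 else (255, 255, 0)
```

A heading east followed by a turn north gives a turn angle of about -90 degrees (a left turn). Note how the sketch also exposes the stated accuracy problem: two exits whose turn angles differ by only a few degrees are hard for a user to tell apart in the overlay.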

We have presented a novel approach that tightly merges the carrier's behavior into the GPS navigation process. The proposed approach provides users with comprehensive and precise navigation data through the sensing equipment embedded in a smartphone, without any auxiliary hardware. To provide users with an enhanced navigational experience, future work should elucidate human behavior further to address these issues more thoroughly.

Finally, thanks to precise positioning and the capture of users' habitual behavior, the LVN system is useful in outdoor shopping areas, delivering targeted advertisements when a user faces a store. It can also be used in theme parks to make visitors aware of each facility. In all, any application that requires precise position and orientation to provide timely services will find the LVN system of value.

Back to Top


Figures

F1 Figure 1. Navigation system continuum.

F2 Figure 2. Live-view GPS navigation (LVN) system architecture.

F3 Figure 3. Position-aware service diagram.

F4 Figure 4. Orientation-aware service diagram.

F5 Figure 5. States transition when the user is not following the planned route. (a) Traditional GPS navigation approach; (b) Waypoint-aware method.

F6 Figure 6. The prototype of the live-view GPS navigation system.

F7 Figure 7. User interface for initialization.

F8 Figure 8. User interface during navigation.

F9 Figure 9. CDF with and without PGPS.

F10 Figure 10. Histogram of executing time.

Back to Top

Tables

UT1 Table. The statistical survey results of Q1, Q2, Q3, and Q4 for Group 1 and Group 2.

Back to Top

References

    1. Chen, C. Bridging the gap: The use of Pathfinder networks in visual navigation. J. Visual Languages and Computing 9, (1998), 267–286.

    2. Crampton, J. The cognitive processes of being lost. The Scientific J. Orienteering 4, (1998), 34–46.

    3. Wagner, D., Mulloni, A., Reitmayr, G. and Drummond, T. Pose tracking from natural features on mobile phones. IEEE and ACM International Symposium on Mixed and Augmented Reality, (Sept. 2008), 15–18.

    4. Eaton, G. Wayfinding in the library: Book searches and route uncertainty. RQ 30, (1992), 519–527.

    5. Garmin. Garmin Features, Feb. 2010; http://www8.garmin.com/automotive/features/.

    6. Goodman, S.N. Toward evidence-based medical statistics. 1: The P value fallacy. Annals of Internal Medicine 130, (1999), 995–1004.

    7. Hart, R.A. and Moore, G.T. The development of spatial cognition: A review. In Environmental Psychology: People and their Physical Settings. H.M. Proshansky, W.H. Ittelson, and L.G. Rivlin, Eds. Holt Rinehart and Winston Press, 1976, 258–281.

    8. Huang, J.Y. and Tsai, C.H. Improve GPS positioning accuracy with context awareness. In Proceedings of the First IEEE International Conference on Ubi-media Computing. (Lanzhou University, China, July 2008), 94–99.

    9. Klein, G. and Murray, D. Parallel tracking and mapping on a camera phone. In Proceedings of International Symposium on Mixed and Augmented Reality (Oct. 2009), 83–86.

    10. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Intern. J. of Computer Vision 60 (2004), 91–110.

    11. Nakayama, K. and Loomis, J.M. Optical velocity patterns, velocity-sensitive neurons, and space perception: A hypothesis. Perception 3, (1974), 63–80.

    12. Ozuysal, M., Fua, P. and Lepetit, V. Fast keypoint recognition in ten lines of code. In Proceedings of CVPR, (June 2007), 1–8.

    13. Tolman, E.C. Cognitive maps in rats and men. Psychological Review 55, (July 1948), 189–208.

    14. You, S., Neumann, U. and Azuma, R. Hybrid inertial and vision tracking for augmented reality registration. In Proceedings of IEEE Virtual Reality (Mar. 1999), 260–267.
