An Algorithmic Approach to Adapting Edge-based Devices for Autonomous Robotic Navigation

Significant progress has recently been made in the development of intelligent mobile robots capable of autonomous navigation using edge-computing systems. Such a robot can sense changes in its environment and control its mechanical behavior to accomplish preprogrammed motions. Several algorithms were used in developing the robot's control software, including the moving average filter, the extended Kalman filter, and a covariance algorithm. Using these algorithms, the robot learns from its sensors to estimate and control its position, velocity, and the proximity of obstacles along its path while autonomously navigating to a predetermined location on the earth's surface. Results show that our algorithmic approach to developing software systems for autonomous robots on edge-computing devices is viable, cost-efficient, and robust. Hence, our work is a proof of concept for the further development of edge-based intelligence and autonomous robots.


Introduction
A popular approach to developing an autonomous robot is to adopt Central Processing Unit (CPU) based computers for running the robot's control software [1]. The NVIDIA hybrid GPU-CPU computer is a popular platform for developing such autonomous systems, but it comes at a considerable development cost [2]. Evidently, the amount of computing power required to make a robot intelligent depends on the type and complexity of the intelligent algorithms to be computed [3]. Considering the recent advances in microprocessor technology, today's roboticists have more robust and versatile system-on-chip (SoC) computers and microcontroller platforms at their disposal. Unlike the microcontrollers available a few decades ago, today's microcontrollers and single-board computers compete with orthodox CPU-based platforms in speed, processing power, memory size, input/output interfaces, and programmability for computing simple artificial intelligence (AI) algorithms and autonomous functions. These advances have made it possible to develop various kinds of AI-based embedded software that run on SoC computers and microcontrollers, an approach popularly and more recently dubbed "edge computing".
In this paper, we propose an algorithmic approach to implementing an edge-based intelligent motion-control scheme for our quadrupedal-wheeled robot in [4]. The hardware components of the robot's control system include a system-on-chip computer and a microcontroller. The robot's control software comprises both parallel and object-oriented algorithms, which constitute its intelligence schema. We also introduce a method for creating robotic systems that are intelligent and capable of autonomous behaviors using edge-computing devices. This enables us to explore the possibilities of achieving microprocessor-grade computations with edge-based devices. Our aim is to show how effectively an edge-computing device can be adapted for point-to-point autonomous navigation of a mobile robot. This involves physical improvements to the robotic system we developed in [4], development of the controller, and formulation of relevant parallel computation and control algorithms using object-oriented programming (OOP) techniques.
We start with a review of relevant literature in Section 2. Our experimental platform is detailed in Section 3, and the newly proposed intelligence schema is explained in Section 4. We discuss state estimation and motion control of the robot in Section 5. We conclude with a summary of the results of our field experiments in Section 6, where we also discuss possible applications.

Literature Review
The microcontroller is a complete single-chip computer system optimized for the primary function of control. Basically, a microcontroller comprises a microprocessor, Read-Only Memory (ROM), Random-Access Memory (RAM), several input/output (I/O) interfaces, and one or more serial ports. Modern microcontrollers are enhanced with higher processor speeds, radio/Wi-Fi capability, sufficient memory, an integrated Analog-to-Digital Converter (ADC), and a boot-loader (i.e., an embedded operating system) to facilitate OOP. These novel features have opened the possibility of implementing AI and Internet of Things (IoT) functions on the microcontroller [5].
Besides high development cost, the adoption of CPU-based computers for autonomous robots can lead to over-computerization, which in turn can result in a large and clumsy robot that takes considerable time and energy to perform simple intelligent tasks. This is evident in a study by Stewart et al., which discusses the need to implement AI right inside the microchip [6]. According to them, "it makes no sense to use the CPU to put just a bit of intelligence into a thermostat". Thus, the future of AI will see a major paradigm shift from the traditional method of cloud- and CPU-based AI computation to localized computation in the microcontroller, referred to as edge intelligence or edge computing [6]. To avoid the computational redundancy and economic constraints associated with CPU-based AI computation, some roboticists now adopt the concept of edge computing. For instance, Mamdoohi et al. used the PIC32MX microcontroller to demonstrate how a microcontroller can implement a genetic algorithm for polarization control [7].
Their experiment showed real-time computation of the complex algorithm with an average latency of 17 microseconds, which, according to them, is low enough for their application. Also, Hussain et al. developed an autonomous robot for logistical navigation using the ATMEL AT89C52 microcontroller, based on their hypothesis that high-level algorithms can be encoded into a microcontroller for simple AI-based tasks [8].
The concept of edge computing is also applicable to domestic service robotics as a home automation system. This has inspired the development of microcontroller-based robots with sufficient intelligence to perform simple household chores. For example, Mir-Nasiri et al. developed a pneumatically actuated wall-climbing robot using the PIC16F877A microcontroller [9]. It can perform glass-cleaning tasks while navigating autonomously along the exterior walls of high-rise buildings, guided by four proximity sensors and an optical odometer. Similarly, Apoorva et al. used an Atmega328 microcontroller board to develop an autonomous robot with low-level intelligence for tracking, picking, and disposing of garbage, citing how this simple AI could relieve humans of the monotonous and hazardous job of waste collection [10].
For autonomous navigation, the SoC computer and the microcontroller have proven to be effective edge-computing devices. An exciting demonstration of this is the work of Efaz, which involves the design of a speed-controlled, path-finding, obstacle-avoidance robot using OOP techniques [11]. This shows the practicality of edge-based physical computing with object-oriented algorithms. The Kalman filter is a simple AI algorithm that takes input data from multiple sensors and estimates unknown variables amidst potentially high levels of signal noise, making it a very significant tool for autonomous navigation. Because of its simplicity, many engineers use it to develop edge-based guidance systems for autonomous robots. For example, Vukelic et al. implemented the Extended Kalman Filter (EKF) algorithm on the mbed-LPC1768 microcontroller to fuse data from an inertial sensor and a Global Positioning System (GPS) sensor for autonomous robot navigation [12]. Their results showed no difference between the practical implementation of the EKF on the microcontroller and the system simulation.
The second class of edge-computing devices that competes well with many desktop machines in computing power, graphics processing, and versatility is the aforementioned SoC computers. This is attributed to the continual shift from single-core to multi-core processors in embedded systems and edge computing, coupled with the availability of efficient parallel programming technologies [13]. A typical example is the Linux-based Raspberry Pi 2, which becomes crucial when two or more algorithms are to be executed in parallel. With SoC platforms, robot software developers can now create complex edge-intelligent systems that were previously only possible with conventional desktop computers [14].

Experimental Platform
To test our computational hypothesis, we adopted the robotic system we developed in [4] as our experimental platform. Fig. 1 shows that the robotic system features an active suspension system, a skid-steering mechanism, several sensors, and an edge-based control system.
The 3D model of our robot's mechanical system is shown in Fig. 2, and the details of the underlying mechanics are given in [4]. Hence, the rest of this paper focuses on the formulation of parallel, object-oriented mathematical algorithms that can be executed by means of edge computing to estimate and control the navigational status of our robot as an intelligent system. The computing and control architecture of our robotic system incorporates a Raspberry Pi 2 SoC computer and an Atmega2560 microcontroller (MCU) board, as shown in Fig. 3. The Raspberry Pi 2 serves as the "companion computer" that executes complex calculations in parallel to support the control functions of the MCU board (i.e., the main controller), which performs all the low-level computing and control functions. The algorithms implemented on the companion computer include the proximity data fusion algorithm (Algorithm 2), the covariance algorithm (Algorithm 3), and the localization algorithm (Algorithm 5). A companion computer is necessary because these algorithms require high computing power and the scheduling function of an operating system to run in parallel. For this purpose, the Open Multi-Processing (OpenMP) application program interface (API) is used, based on the concepts in [13], taking advantage of the multi-core ARM processor in the Raspberry Pi 2 (i.e., the companion computer of our robot). The companion computer performs the functions of:
• collecting measurement data from the navigational sensors on board the robot,
• estimating the states of the robot, and
• transmitting the resulting information to the main controller.
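Although the companion-computer pipeline above is scheduled with OpenMP on the multi-core ARM processor, the same three-stage flow (collect, estimate, transmit) can be sketched in Python, the language the companion-computer algorithms are later encoded in. The simulated readings and the running-average estimator below are illustrative placeholders, not the robot's actual sensors or filters:

```python
from multiprocessing import Process, Queue

def sensor_stage(raw, q_out):
    # Stage 1: collect measurement data (here, simulated readings in cm).
    for reading in raw:
        q_out.put(reading)
    q_out.put(None)                       # end-of-stream sentinel

def estimate_stage(q_in, q_out):
    # Stage 2: estimate states; a running average stands in for the
    # real filters (MAF, covariance, EKF) described in Section 5.
    total, n = 0.0, 0
    while (r := q_in.get()) is not None:
        total += r
        n += 1
        q_out.put(total / n)
    q_out.put(None)

def run_pipeline(raw):
    # Wire the stages together with queues and run them as parallel
    # processes, mimicking the OS-scheduled parallelism on the Pi.
    q1, q2 = Queue(), Queue()
    p1 = Process(target=sensor_stage, args=(raw, q1))
    p2 = Process(target=estimate_stage, args=(q1, q2))
    p1.start()
    p2.start()
    estimates = []
    while (e := q2.get()) is not None:
        estimates.append(e)               # stage 3: "transmit" downstream
    p1.join()
    p2.join()
    return estimates
```

Each stage runs in its own process, so a slow estimator never blocks sensor collection, which is the point of running these algorithms in parallel on the companion computer.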
The types and functions of these sensors are cataloged in Table 1. Among them are the proximity sensors of our robotic system, which comprise an ultrasonic sensor and an infrared (IR) distance sensor mounted on the frontal projection of the robot's chassis through a servo-controlled revolver (with a yaw rotation span of 0° to 180°). At the lower level, the algorithms implemented on the main controller include the intelligence scheming algorithm (Algorithm 1), the obstacle avoidance algorithm (Algorithm 4), the path-tracking algorithm (Algorithm 6), and the maneuvering algorithm (described in Subsection 5.2). The main controller is enhanced with an L293D-IC-based motor driver, which enables it to regulate the flow of electrical power to the driving motors during motion control.
Using the Universal Asynchronous Receiver-Transmitter (UART) protocol, two serial communication channels are established between the companion computer and the main controller to enable real-time transfer of information, control signals, and computational requests between the two devices. Both the companion computer and the main controller feature additional I/O ports for the integration of more sensors and actuators as external peripherals when necessary, or during field tests. The entire hardware system is powered through a DC-DC buck converter, which converts the 12 V DC supply from the robot's battery to the current/voltage requirements of the companion computer, the main controller, and their peripherals.
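As an illustration of the UART exchange between the two devices, the snippet below sketches a hypothetical line-based framing for the link. The message format, port name, and baud rate are our assumptions, not taken from the paper:

```python
# Hypothetical framing for the UART link: "KEY:val1,val2,...\n".
def frame(key, values):
    # Encode a message key and its numeric payload as an ASCII line.
    payload = ",".join(f"{v:.2f}" for v in values)
    return (key + ":" + payload + "\n").encode("ascii")

def parse(line):
    # Decode one received line back into (key, list-of-floats).
    key, _, payload = line.decode("ascii").strip().partition(":")
    return key, [float(v) for v in payload.split(",")] if payload else []

# On the companion computer, frames could be exchanged with pySerial
# (assuming the Pi's UART appears as /dev/ttyAMA0; the port may differ):
#   import serial
#   link = serial.Serial("/dev/ttyAMA0", baudrate=57600, timeout=0.1)
#   link.write(frame("POS", [6.45, 3.39, 87.0]))
#   key, vals = parse(link.readline())
```

A plain text framing like this keeps the protocol debuggable with a serial monitor, at the cost of a few extra bytes per message.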

Intelligence Schema
The intelligence schema (i.e., the Intel_schema function in Algorithm 1) of our robot's control flow involves three basic functions for the direct control of the robot's perceptual responses and motion. These objective functions are:
1. Change_path (in Algorithm 4),
2. Move_fwd (in Listing 2), and
3. Auto_navigate (in Algorithm 6).
The above functions call upon one another and other subordinate functions (discussed in Section 5) to make the robot act as an intelligent agent. Algorithm 1 starts up the robotic system once its power switch is turned on. It coordinates Algorithms 4 and 6, the actual autonomous control functions of the robot, whose precision depends on the accuracy of two other subordinate functions:
1. Prox_estimate (in Algorithm 2), and
2. Position_estimate (in Algorithm 5).
Algorithm 2 is a data fusion algorithm. Its inputs are proximity measurements from the ultrasonic sensor and the IR distance sensor embedded on the robot. The algorithm fuses the two proximity data streams into a single distance estimate, dist_F, to minimize measurement error and noise. The estimated value of dist_F is used in Algorithm 1 to decide what action the robot should take (i.e., obstacle avoidance or auto-navigation) while it is moving to the target location. The value of dist_F is also used to regulate the driving speed (V) of the robot, where V is a scalar. The relevant fragment of Algorithm 1 reads:

---- Proximity-controlled motion ----
5: set Left_motor_speed as (dist_F cm/s)
6: set Right_motor_speed as (dist_F cm/s)
7: continue                          ▷ keep moving
...
   if d ≤ 40 cm then
10:    call Change_path              ▷ avoid obstacles
11: else if 40 cm < d ≤ 120 cm then
12:    call Move_fwd                 ▷ move forward
13: else if d ≥ 120 cm then
14:    call Auto_navigate            ▷ move to target
15: end if
16: ...
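The threshold logic of this fragment can be condensed into a small Python function. The string return values below are illustrative stand-ins for the actual function calls:

```python
def intel_schema_step(dist_f):
    """Select the robot's next action from the fused proximity estimate
    dist_f (in cm), mirroring the thresholds in Algorithm 1."""
    if dist_f <= 40:
        return "Change_path"       # obstacle close: avoid it
    elif dist_f <= 120:
        return "Move_fwd"          # clear for now: keep moving forward
    else:
        return "Auto_navigate"     # path clear: head for the target

def proximity_speed(dist_f):
    """Proximity-controlled motion: both motor speeds are set to
    dist_f cm/s, so the robot slows as obstacles get closer."""
    return dist_f, dist_f          # (left_speed, right_speed)
```

Using elif chains also resolves the boundary case at exactly 120 cm, which the pseudocode's overlapping conditions (d ≤ 120 and d ≥ 120) leave ambiguous.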

States Estimation and Motion Control
Autonomous navigation involves the solution to the problem of finding a collision-free motion between an initial and a target location in space and time [15]. Therefore, mathematical algorithms are formulated in this section for the accurate estimation of an obstacle's proximity to the robot and of the robot's position in the geographical coordinate system. These are used to perform the obstacle-avoidance and path-tracking motion-control functions, respectively; the former is discussed in Subsection 5.1 and the latter in Subsection 5.2. We ensured that the adopted mathematics and models are as simple as possible so that the resulting algorithms can be implemented on the companion computer and the main controller. In this regard, only a non-holonomically constrained 2-D model of our robot is used.

Proximity sensing and obstacle avoidance
To enhance the control of our robot's motion during obstacle avoidance, a technique was developed for detecting the proximity of an obstacle from the robot, as shown in Fig. 4. This technique uses the ultrasonic sensor and the infrared (IR) distance sensor to simultaneously measure how distant the obstacle is from the robot. This minimizes the error associated with each of the two sensors while harnessing their peculiar advantages. For instance, unlike the infrared sensor, the ultrasonic sensor can scan a wider volume of space and detect transparent barriers, but it has limitations when detecting hot materials. In contrast, the infrared sensor is more accurate, since its beam is less conical than the ultrasound wave. To economize computing resources, we formulated a data fusion algorithm that combines the Moving Average Filter (MAF) and the covariance formula to fuse the incoming data from the two sensors and filter off the noise in the signals, thereby minimizing error in proximity measurement. Based on [16], the MAF is derived as follows:

$$\bar{d}_k = \frac{d_{k-n+1} + d_{k-n+2} + \cdots + d_k}{n}, \quad (1)$$

where $\bar{d}_k$ in Eq. (1) is the average of the $(k-n+1)$th to the $k$th measurement values and $n$ is the total number of values. Hence, the moving average of the previous measurement is given in Eq. (2) as

$$\bar{d}_{k-1} = \frac{d_{k-n} + d_{k-n+1} + \cdots + d_{k-1}}{n}. \quad (2)$$

Subtracting Eq. (2) from Eq. (1) yields

$$\bar{d}_k = \bar{d}_{k-1} + \frac{d_k - d_{k-n}}{n}. \quad (3)$$

Eq. (3) is the MAF formula in the form of a recursive function. Its application is described in Algorithm 2, which contains the Move_ave(dist) function, where 'dist' is the parameter for fetching raw proximity input data from either sensor. This function can be called in real time to consecutively calculate the moving averages of the streams of measurement data from each sensor. Hence, two moving-average proximity values are computed at every instant: one for the ultrasonic sensor's measurements and one for the infrared sensor's.

(EAI Endorsed Transactions on Context-aware Systems and Applications, Vol. 8, 2022)
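A minimal Python sketch of the recursive MAF of Eq. (3), analogous to the paper's Move_ave(dist) function, is shown below; the class name and the window bookkeeping are our own:

```python
class MovingAverageFilter:
    """Recursive moving average over the last n samples (Eq. (3)):
    the newest reading replaces the oldest in the running mean."""

    def __init__(self, n):
        self.n = n
        self.window = []           # holds the last n raw readings
        self.mean = 0.0

    def update(self, d_k):
        self.window.append(d_k)
        if len(self.window) <= self.n:
            # Still filling the window: plain cumulative average.
            self.mean = sum(self.window) / len(self.window)
        else:
            d_old = self.window.pop(0)
            # Recursive form: adjust the previous mean by (d_k - d_old)/n,
            # avoiding a full re-summation on every new sample.
            self.mean = self.mean + (d_k - d_old) / self.n
        return self.mean
```

Two such filter instances would run side by side, one fed by the ultrasonic sensor and one by the infrared sensor.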
Prior to fusing these two moving averages, we derive a covariance (Cov) formula, which is applied to ensure that the raw measurements from the two sensors are consistent with each other, i.e., that both sensors are ranging the same obstacle. The Cov is given by

$$Cov = \frac{1}{n-1}\sum_{index=1}^{n}\left(L_{1,index} - \bar{L}_1\right)\left(L_{2,index} - \bar{L}_2\right), \quad (4)$$

where $L_1 \Leftarrow dist_1$ denotes proximity measurements from the ultrasonic sensor, $L_2 \Leftarrow dist_2$ denotes proximity measurements from the infrared sensor, and $\bar{L}_1$ and $\bar{L}_2$ are their sample means. The index is the sampling integer (index = 1, 2, ..., n), and n is the total number of measurement samples.
At any instant, if the value of Cov in Eq. (4) is positive, the mean of the two moving averages is calculated as dist_F and returned as the measurement estimate of the object's proximity from the robot. If the value of Cov is negative, Algorithm 2 is recalled to repeat the data fusion process. The algorithm for the application of Eq. (4) is given in Algorithm 3. With this scheme, we significantly reduced the error in measurements from the robot's proximity sensors to a level acceptable for the obstacle-avoidance motion control of our robot. The algorithm for obstacle avoidance is given in Algorithm 4. It involves several calls to various maneuvering functions (described in Subsection 5.2) in an effort to find the most obstacle-free direction before returning control to the Intel_schema in Algorithm 1. Therefore, our robot can reliably and intelligently avoid both static and moving obstacles along its navigational pathway to a given target location.
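The consistency check of Eq. (4) and the mean-based fusion can be sketched as follows. Returning None to signal a re-run is an illustrative convention, not the paper's exact control flow:

```python
def covariance(l1, l2):
    """Sample covariance of two equally long measurement lists (Eq. (4))."""
    n = len(l1)
    m1 = sum(l1) / n
    m2 = sum(l2) / n
    return sum((a - m1) * (b - m2) for a, b in zip(l1, l2)) / (n - 1)

def fuse_proximity(avg_ultrasonic, avg_infrared, l1, l2):
    """Return the fused distance dist_F when the two sensors agree
    (positive covariance over the recent samples l1, l2); otherwise
    return None, signalling that the fusion should be repeated."""
    if covariance(l1, l2) > 0:
        return (avg_ultrasonic + avg_infrared) / 2
    return None
```

A positive covariance means the two sensors' readings rise and fall together, which is what one expects when both are ranging the same obstacle.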

Mechanics, Localization and Path-tracking
Here, a model is developed to describe how the robot navigates from an initial point, $P_1(\theta_1, \varphi_1)$, to the target point, $P_2(\theta_2, \varphi_2)$, via the shortest possible path between the points, as shown in Fig. 5. This requires knowledge of the robot's dynamics and position, and the application of control. Fig. 5 describes our robot as a skid-steering robot, where $\phi$ is the instantaneous bearing of the robot at position $P_1(\theta_1, \varphi_1)$ and $\psi$ is the bearing of the target location, $P_2(\theta_2, \varphi_2)$, with respect to magnetic North. For unregulated mobility, the yaw rate (i.e., sideways angular velocity) of the robot, $\omega$, and its linear velocity, $V$, are determined by the difference between the torques of the left and right wheels ($\tau_L$ and $\tau_R$), which directly influence the speeds ($V_L$ and $V_R$) of the wheels. The term $2\gamma$ is the kinematic width of the robot, while $\beta$ is the radius of each wheel. Based on rotational mechanics, $V$ and $\omega$ are expressed in Eqs. (5) and (6) as

$$V = \frac{V_R + V_L}{2}, \quad (5)$$

$$\omega = \frac{V_R - V_L}{2\gamma}. \quad (6)$$

Following [17], the dynamics of motion of the robot are expressed in Eq. (7) as

$$I_w \dot{\omega} = \frac{\gamma}{\beta}\left(\tau_R - \tau_L\right), \quad (7)$$

where $\dot{\omega}$ is the angular acceleration in rad·s⁻² and $I_w$ is the moment of inertia in kg·m². For regulated navigation, $u = [\tau_R \ \tau_L]^T$ is the input vector to the robot's drive system, whose function is to implement motion control by differentially driving either pair of the robot's wheels according to Eq. (7). (The listings of Algorithm 2, the proximity-data fusion algorithm, whose inputs dist₁ and dist₂ are the incoming ultrasonic and infrared data, and Algorithm 3, the covariance algorithm, whose loop accumulates the products of deviations until index = n and returns cov ← sum ÷ (n − 1), appear here.)
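Assuming the standard skid-steer forms of Eqs. (5) and (6), the kinematics reduce to a one-line Python helper:

```python
def body_velocity(v_left, v_right, gamma):
    """Skid-steering kinematics (assumed standard forms of Eqs. (5)
    and (6)): forward speed V and yaw rate omega from the left/right
    wheel speeds, where 2*gamma is the kinematic width of the robot."""
    V = (v_right + v_left) / 2.0          # mean of the two sides
    omega = (v_right - v_left) / (2.0 * gamma)  # differential turns the base
    return V, omega
```

Equal wheel speeds give pure translation; any speed difference yaws the robot, which is the mechanism the maneuvering functions exploit.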
This input vector is electronically generated by the main controller, during which the instantaneous values of τ_R and τ_L are determined based on the robot's states and its perception of the environment. Therefore, the values of τ_R and τ_L influence the mobility and direction of motion of the robot. The resulting motion is measured by the embedded inertial sensor as $u^* = [V \ \omega]^T$. The application and direct control of τ_R and τ_L at the electromechanical level are discussed extensively in Subsection 5.2.
According to the work of [18], the lateral offset, l, of the robot with respect to the desired path is related to ω by Eqs. (8) and (9), where ϑ and ε are the steering angle and the course angle, respectively, with respect to the desired path.
Again, the time-discrete state-space model of our robot is given in Eq. (10), where Eq. (12) is the state matrix and Eqs. (13) and (14) are the input and output matrices, respectively. The variables τ and T_s are the time delay and the sampling time, respectively. Either l or ϕ can serve as the controlled variable, with respect to ε and ψ, respectively. Unlike [18], we selected ϕ as the controlled variable in order to minimize the computational load. The model in Eq. (10) only serves to provide real-time estimates of the robot's location on the earth's surface, which is a prerequisite for path tracking. Thus, our robot can be controlled towards the target location as a quasi-closed feedback system.
To ensure accurate positional mapping of the robot at all instants, we derive a position vector equation, given in Eq. (15), which maps the position vector, x, to the instantaneous location of the robot. By applying the methods in [19] and [20], we performed the transformation between the Cartesian and geographical coordinate systems, which is crucial for the real-time visualization of our robot's navigational routes.
Again, the input variable V in vector u*[k] is approximately equal to the integral of the instantaneous forward acceleration, a_inst, of the robot (i.e., $V \approx \int a_{inst}\,dt$), according to the inertial sensor's measurements. The computational syntax for this is expressed in Listing 1. Alternatively, we could directly measure the linear speed, V, of the robot by attaching an optical speed sensor to one of the robot's wheels. Using the GPS sensor, the robot can directly measure its real-time position as z[k] = [x(k), y(k)]; the GPS sensor's model is therefore expressed in Eq. (16). In the present paper, two algorithms are adopted for the control of point-to-point navigation, based on the work of [21]: the localization and path-tracking algorithms, discussed in the following subsections.
Localization: position estimation function. For real-time estimation of the robot's motion on the surface of the earth, the EKF is applied. It is used to formulate a localization algorithm that first predicts and then estimates (i.e., updates) the position of the robot. This function is outlined in Algorithm 5.
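As a sketch of the predict-update structure behind Algorithm 5, here is a scalar, single-axis Kalman recursion. The paper's EKF is multivariate and fuses inertial and GPS data; the variable names and noise values here are purely illustrative:

```python
def kf_predict(x, p, v, dt, q):
    """Predict step: dead-reckon the position estimate x (variance p)
    forward using the measured speed v over an interval dt; q is the
    process-noise variance added per step."""
    return x + v * dt, p + q

def kf_update(x, p, z, r):
    """Update step: blend the prediction with a position fix z (e.g.,
    one GPS axis) whose measurement variance is r."""
    k = p / (p + r)          # Kalman gain: how much to trust the fix
    x = x + k * (z - x)      # corrected estimate
    p = (1.0 - k) * p        # uncertainty shrinks after the update
    return x, p
```

Run in a loop, predict advances the estimate between GPS fixes and update pulls it back toward each fix, exactly the "predict, then estimate" cycle described above.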

Algorithm 5 Localization algorithm
Require: x and …

Algorithm 5 describes a recursive function that runs forever, continuously updating the information about the position of the robot once its controller is electrically powered.

Path tracking: auto-navigation function. Having developed a means (i.e., Algorithm 5) to reliably estimate the position of the robot on the surface of the earth, we can now formulate an algorithm that computes the displacement of the target location, P_2, from the measured initial location, P_1, of the robot in the geographic coordinate system, and then maneuvers the robot to the target location, as shown in Fig. 6. To do this, we implemented the Haversine formula, following the review in [22], for computing the great-circle distance (i.e., the shortest path), ℓ, on the earth's surface between P_2 and P_1 from their longitudes and latitudes, while ignoring the presence of opportunistic obstacles along the path. This formula is described in Eqs. (19) and (20) as

$$H(\Theta) = \sin^2\frac{\Delta\theta}{2} + \cos(\theta_1)\cdot\cos(\theta_2)\cdot\sin^2\frac{\Delta\varphi}{2}, \quad (19)$$

$$\ell = 2r\arcsin\left(\sqrt{H(\Theta)}\right), \quad (20)$$

where $\Theta = \ell/r$, $\Delta\theta = \theta_2 - \theta_1$, and $\Delta\varphi = \varphi_2 - \varphi_1$, with r the radius of the earth.
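A direct Python implementation of the standard haversine computation follows; the mean Earth radius value and the function name are our choices:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0   # mean Earth radius; a perfect sphere is assumed

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Haversine formula (Eqs. (19)-(20)): shortest surface distance in
    meters between (lat1, lon1) and (lat2, lon2), given in degrees."""
    t1, t2 = radians(lat1), radians(lat2)
    dtheta = radians(lat2 - lat1)
    dphi = radians(lon2 - lon1)
    # H is the haversine of the central angle between the two points.
    h = sin(dtheta / 2) ** 2 + cos(t1) * cos(t2) * sin(dphi / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))
```

Over the 500 m range assumed in Definition 5.2, the spherical-earth error of this formula is negligible compared with GPS noise.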
In our path-tracking and auto-navigation algorithm, the required destination coordinates, $P_2 = [\theta_2 \ \varphi_2]^T$, are requested from the user as an input during the start-up sequence of the robotic system, while the updated value of the initial position, $P_1 = [\theta_1 \ \varphi_1 \ \phi]^T$, is recursively fetched from Algorithm 5, such that $P_1 \Leftarrow x[k+1]$. The goals of the path-tracking algorithm are to:
1. plot the shortest path between P_1 and P_2,
2. calculate the bearing, ψ, of P_2 from P_1,
3. orient the robot's motion along the bearing ψ, and
4. cause the robot to move in this direction until P_1 ≈ P_2.
In pseudocode, this task is fully described in Algorithm 6. The boundary conditions for the application of Algorithm 6 are enumerated in Definition 5.2.
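Calculating the bearing ψ of P_2 from P_1 is not written out in the paper; the standard initial great-circle bearing (forward azimuth) formula, assumed here, reads in Python:

```python
from math import radians, degrees, sin, cos, atan2

def bearing_to_target(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing psi from P1 to P2, in degrees
    clockwise from North (standard forward-azimuth formula; this is an
    assumption, as the paper does not reproduce it)."""
    t1, t2 = radians(lat1), radians(lat2)
    dphi = radians(lon2 - lon1)
    y = sin(dphi) * cos(t2)
    x = cos(t1) * sin(t2) - sin(t1) * cos(t2) * cos(dphi)
    # atan2 keeps the correct quadrant; fold into the [0, 360) range.
    return degrees(atan2(y, x)) % 360.0
```

The controller would compare this ψ with the compass bearing ϕ reported by the localization filter and yaw the robot until the two agree.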
Algorithm 6 is valid under the following conditions:
1. The earth is assumed to be a perfect sphere.
2. The range of navigation (i.e., the great-circle distance) is limited to 500 m.
3. The robot cannot reverse its motion (i.e., it can only yaw and drive forward), unless Algorithm 4 is called to enable it to avoid an obstacle.
4. The proximity of obstacles ahead must be greater than 120 cm.
5. Test navigation is performed in a controlled environment.
Maneuvering: motor control functions. For efficient maneuverability during obstacle avoidance or auto-navigation, the main controller must be able to control the flow of the required electric power to the robot's drive motors. To achieve this, we introduced the L293D-IC-based motor driver as an electromechanical and software interface between the main controller and the four drive motors, as shown in Figs. 7 and 8. This configuration involves four geared Direct Current (DC) motors (M_1, ..., M_4) that produce the mechanical torques (τ_L and τ_R) that propel the robot over a given terrain. This process is activated by a maneuvering algorithm (Algorithm 7), which is also encoded into the main controller.
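The direction logic of one L293D channel (two input pins plus a PWM-driven enable pin) can be sketched as a lookup table. The maneuvering-function names and the tuple convention below are illustrative, not the paper's exact Algorithm 7:

```python
def motor_command(action, duty=100):
    """Pin states for one L293D channel as (IN1, IN2, EN_duty), where
    EN_duty is the PWM duty cycle (0-100) on the enable pin."""
    states = {
        "forward": (1, 0, duty),   # IN1 high, IN2 low: motor spins forward
        "reverse": (0, 1, duty),   # IN1 low, IN2 high: motor spins backward
        "brake":   (0, 0, 0),      # both low, enable off: motor stops
    }
    return states[action]

def maneuver(command, duty=100):
    """Map a maneuvering function to (left-side, right-side) commands.
    Opposite side directions yaw the skid-steering robot in place."""
    table = {
        "Move_fwd":   ("forward", "forward"),
        "Rotate_cw":  ("forward", "reverse"),  # left ahead, right back
        "Rotate_ccw": ("reverse", "forward"),
        "Brake":      ("brake",   "brake"),
    }
    left, right = table[command]
    return motor_command(left, duty), motor_command(right, duty)
```

On the Atmega2560, the three values of each tuple would be written to two digital output pins and one PWM pin per motor channel.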

Results and Discussion
The navigational schemes and algorithms in Sections 4 and 5 were implemented using the experimental platform in Section 3. Algorithms 2, 3, and 5 are encoded in the Python programming language for execution on the companion computer as object-oriented parallel programs; the low-level control algorithms are encoded on the main controller. We conducted experiments in an open field, as shown in Fig. 9, to test the validity of the mathematical models upon which the algorithms are based and to evaluate the performance of the robot in the physical world. These involve the telemetry of navigational data to a remote data acquisition computer for real-time analysis. Our experimental procedure, analytical techniques, and results are discussed below in Subsections 6.1 and 6.2.

Experimental procedure and results
Because the navigational precision of our robotic system depends on the accuracy of its sensors, our field tests aim to evaluate the accuracy with which our robot can perform both obstacle avoidance and autonomous navigation while maneuvering towards a target location.

Evaluation of proximity measurement technique.
To ensure precision in obstacle avoidance, we evaluated the accuracy of each of the two adopted proximity sensors and of Algorithm 2. To do this, we plotted the real-time values from the ultrasonic sensor (dist_Ultrasonic), the infrared sensor (dist_Infrared), and their fusion (dist_F from Algorithm 2) against time (in seconds). We also plotted the estimate dist_F and the true proximity of the obstacle from the robot (dist_Actual, as measured with a meter rule) against time. The results of this experiment are presented in Figs. 10 and 11; they visualize the error present in the robot's sensitivity to the proximity of obstacles along its path.

Figure 9. Performance evaluations in field tests
(The accompanying table of maneuvering functions includes Rotate_ccw, by which the robot yaws in the counter-clockwise direction; Brake, by which the robot stops moving; and Slow_fwd.)

Graphical visualization of the robot's routes.
To evaluate how accurately our robot could autonomously move to a target location (based on Definition 5.2), we visualized the actual navigational routes of our robot in comparison to the desired path, using real-time position estimates from Algorithm 5. For experimentation, variations in the navigational environment included the number of pre-stationed obstacles along the robot's path of travel and also, the length of the path. The results of this experiment are presented in Figs. 12 to 14.

Discussion
The plots in Fig. 10 show how effectively Algorithm 2 fuses the measurement data from the ultrasonic and infrared sensors as dist_F. Based on Fig. 11, we observe that the fusion product (dist_F) is consistent with the true proximity values (dist_Actual), thus increasing the accuracy of the measurement of an obstacle's proximity from the robot, unlike the direct measurements from the individual sensors (dist_Ultrasonic and dist_Infrared), which individually contain higher levels of noise.

Figure 11. Proximity values (dist_F and dist_Actual) vs time

According to our observations in Figs. 12 to 14, Algorithm 6 is effective, within the bounds of Definition 5.2, at homing the robot to a position near the target location. (Note: the blue dots in the plot of the followed path indicate the turning points, i.e., the points at which the robot changes its direction of motion.) In particular, the plot in Fig. 12 shows that, in the absence of obstacles, the robot moves along a near-straight line from its starting point to the target location, which is evident in the relatively small number of turning points. Comparatively, the plots in Figs. 13 and 14 show that the motion of the robot to a target location involves more turning points as it maneuvers around the pre-stationed obstacles.

Limitations. Based on our field-test observations, the possibility of flaws in the sensors' measurements limits the performance of our robot. Other limitations arise from the topography of the terrain and the mechanical constraints of the robot's drive system. We observed that, unlike on paved paths, the robot experiences transitional difficulties and inadequate steering power when maneuvering over rough terrain and grassland, as a result of increased friction between the wheels and the ground. Apart from these constraints, the overall performance of our robot is satisfactory and meets the design objectives.
Applications. Potential applications of our autonomous robotic system include: 1. Autonomous seed planting: similar to the work of [18], the construct of our robotic system, as well as the underlying auto-navigation algorithm, could be applied in the development of an autonomous seed planter for enhanced precision and efficiency in crop farming.
2. Environmental monitoring: Similar to the work of Olakanmi et al. in [23], our robotic system could be re-purposed as a multi-sensor surveillance system for environmental monitoring, especially in hazardous and industrial environments.
3. Office file movement: our robotic system could be developed into a semi-autonomous door-to-door file mover along passageways in a large office building.
4. Home delivery: our robotic system could find application in logistics as a robot that uses geospatial data and local beacon signals to autonomously navigate along streets and deliver merchandise to homes.

Conclusion
In this work, an edge-based autonomous robotic system was developed for point-to-point navigation using geospatial data. The system is able to avoid obstacles on its path to a target location by fusing the proximity measurements of obstacles detected by both the ultrasonic and the infrared proximity sensor. In operation, several algorithms are used, including the proximity-sensor fusion algorithm for obstacle-avoidance motion control and the localization algorithm for autonomous navigation of the robot to a target location. The embedded hardware of the robot comprises two edge devices, the main controller and the companion computer, which work together as complementary systems to implement the robot's control algorithms. Several field tests were conducted to evaluate the performance of the robot, including evaluations of how accurately the robot can detect obstacles along its travel path and maneuver around them, and of how precisely it can reach its target location. Results show that our robot performs as expected, despite its operational constraints. In essence, our work demonstrates that edge devices such as microcontrollers and SoC computers are applicable to the development of intelligent and autonomous systems. Future developments in this area of research may explore the potential applications of our system, such as autonomous seed planting, environmental monitoring, and logistical automation, as mentioned in Section 6.2. We therefore hope that our work stimulates interest and enthusiasm, especially with regard to the practical applications of our robot and its corresponding algorithms.