Rivian Sensor Leader Discusses Importance of Better Localization Sensors for Level 3

The latest safety award for Rivian is an Insurance Institute for Highway Safety Top Safety Pick+ designation for the R1S for 2023. It accomplishes this and other safety advances with its Driver+ ADAS (advanced driver assistance system), which integrates a suite of sensors along with driver attention, GNSS, and IMU modules. At AutoSens Detroit 2023, Abdullah Zaidi, Engineering Lead and Senior Manager at Rivian, presented his take on the state and near future of ADAS sensors, including the benefits of adding lidar and the importance of better localization sensors for Level 3 systems.

“It is very important for a vehicle to know its location within the geometric space,” he said.

With GNSS modules, he added that proper design of the RF (radio frequency) hardware and antenna are key, with the industry increasingly turning to more RF modalities. Since GNSS signals can get jammed or interfered with on a vehicle, he believes that the RF hardware should be kept separate or designed in a way that it doesn’t see a lot of interference.

The best possible GNSS accuracy is needed.

“The way you get it is by having access to more constellations as well as more frequencies,” he said. “Today, the non-high-precision GNSS generally use [the] L1 band but moving forward there’s a trend that the GNSS suppliers are transitioning towards using all three of them.”

Having access to L1, L2, and L5 bands is key for ADAS systems.

“It’s important to have access to all those frequencies and make sure there are no issues or inaccuracies due to the multipath or one of the frequencies not being available all the time,” he said.

He believes that more work is needed on GNSS redundancy for Level 3.

“Today most of the GNSS receivers aren’t ASIL B-qualified,” said Zaidi. “Moving forward, you would want to know that your GNSS is failing so the other sensors can take over, and that’s where the ASIL B is important.”

For inertial measurement sensors, low bias instability is needed for optimal dead reckoning.

“Vehicles generally dead reckon when you’re going through the tunnel, and the IMU is the one that helps you fail safely,” he said. “So, you have to ensure that your IMUs have a higher accuracy and they don’t drift a lot in the yaw dimension.”
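
To make the stakes concrete, here is a minimal back-of-the-envelope sketch (not Rivian’s figures; the bias, speed and outage duration are illustrative assumptions) of how an uncompensated yaw-rate bias turns into cross-track drift during dead reckoning:

```python
import math

# Illustrative assumptions, not vendor specs: a tunnel transit
# with GNSS unavailable, relying on the IMU for dead reckoning.
speed_mps = 25.0          # vehicle speed (~90 km/h)
outage_s = 30.0           # duration of the GNSS outage
bias_deg_per_h = 10.0     # assumed uncompensated yaw-rate bias

bias_rad_per_s = math.radians(bias_deg_per_h) / 3600.0

# Heading error grows linearly with time under a constant bias,
# and cross-track error grows roughly quadratically (small angles):
#   cross_track = integral of v * (b * t) dt = v * b * t^2 / 2
heading_err_deg = bias_deg_per_h * outage_s / 3600.0
cross_track_m = speed_mps * bias_rad_per_s * outage_s ** 2 / 2.0

print(f"heading error after outage: {heading_err_deg:.3f} deg")
print(f"cross-track drift: {cross_track_m:.2f} m")   # ~0.55 m here
```

On these assumed numbers, half a meter of drift accumulates in 30 seconds, a large share of the lateral margin inside a 3 m lane, which is why low bias instability matters for failing safely.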

In this regard, he believes MEMS technology is being used effectively for IMUs, but a promising trend is for companies to move to fiber optics or silicon photonics. Careful consideration must be given to the IMU’s packaging location in the vehicle to reduce its exposure to temperature changes and vibration.

“Otherwise, that impacts the performance of your vehicle staying in the lane,” Zaidi said.

Like with GNSS, redundancy with IMUs is key. “If one of those IMUs fails, then you don’t have a backup to fall to in the scenario where you are aiming for a fail-safe operation,” he concluded.

Anello Reveals GNSS INS System with ‘World-first’ Optical Gyro

Santa Clara, CA-based startup Anello Photonics has announced a GNSS INS module that it says is the world’s smallest optical gyro inertial navigation system for GPS-denied navigation and localization. It is powered by the company’s optical gyroscope technology and AI-based sensor fusion engine, the combination engineered to deliver high-accuracy positioning and orientation for applications in the agriculture, construction, robotics, and autonomous vehicle space.

“We are actively engaged with customers who need robust, high-precision optical gyro-based solutions for their autonomous applications,” said Dr. Mario Paniccia, CEO of Anello Photonics.

Anello was co-founded by Paniccia and CTO Mike Horton, pioneers in the field of silicon photonics, sensors, and navigation, with the early support of Catapult Ventures and high-volume fab Tower Semiconductor. Coming out of stealth at CES 2023, Anello displayed the low-noise and -drift SiPhOG™ sensor, which it says is the first silicon photonics optical gyroscope and the smallest optical gyroscope in the world.

“It was a very ambitious thing we took on—creating a fiber gyro on a chip,” Paniccia told Inside GNSS. “We’re measuring a very tiny signal and putting it all into a standard process that’s fabricated in a high-volume fab. To our knowledge, no one’s building or has anything working at this level, let alone full INS systems, with integrated photonics.”

According to Anello, the silicon photonics optical gyroscope technology can be board-mounted and is made with integrated photonics components so it can be processed in high volume just like other integrated circuits. Another key advantage is a low unaided heading drift of less than 0.5°/h.

“This is the sweet spot,” said Paniccia. The MEMS gyros in mobile phones and AirPods—whose drift starts around 2.0°/h, with high temperatures pushing that to hundreds and potentially a thousand degrees per hour—are not accurate enough for the autonomy safety case, he added. Typical fiber gyros, “the gold standard” for accuracy, are too big, bulky, and expensive.
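
For a rough sense of scale (a sketch using the drift rates quoted above; the outage length, hot-MEMS figure and travel distance are assumptions for illustration), heading error scales linearly with drift rate and time:

```python
import math

outage_min = 5.0      # assumed dead-reckoning interval without GNSS
distance_km = 1.0     # assumed distance traveled at the accumulated error

for label, drift_deg_per_h in [("optical gyro (<0.5 deg/h)", 0.5),
                               ("consumer MEMS, nominal", 2.0),
                               ("consumer MEMS, hot (assumed)", 500.0)]:
    heading_deg = drift_deg_per_h * outage_min / 60.0
    cross_track_m = distance_km * 1000.0 * math.tan(math.radians(heading_deg))
    print(f"{label:30s} heading {heading_deg:7.3f} deg, "
          f"~{cross_track_m:6.1f} m cross-track per km")
```

On these assumptions the optical gyro stays within a lane over a kilometer of travel, the nominal MEMS does not, and the hot MEMS is hopeless, which is the gap the “sweet spot” refers to.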

“The idea is to bring the performance of high-precision optical gyros from guided missiles and other high-end applications into a form factor and price point that you can put it into the volume market in the autonomous landscape,” he said.

For the new GNSS INS system, the company launched an evaluation kit about a year ago for customer trials.

“It’s the smallest in this case, [with] not only the smallest gyro that we’ve developed and announced but now we have the smallest inertial navigation system that can be put into real solutions and real applications,” said Paniccia.

The near-term markets for Anello’s technology are in construction and farming “where they can pay a little bit of a premium.” An upcoming robotics product uses basically the same core fundamental platform without GPS.

In the future, the company is working to deliver to the high-volume auto market, and that means not only ASIL-D specs but also immunity to temperature and vibration with lower power consumption.

“We’re trying to gear towards ADAS (L2 plus, L3) over time,” concluded Paniccia.

FocalPoint Software Helps Extend Automotive ADAS Into GPS-challenged Cities

Earlier this year, FocalPoint Positioning announced a collaboration with General Motors on the possible application of the UK GPS software company’s Supercorrelation software in future vehicles, including potential enhancements and operational expansion of GM’s Super Cruise hands-free ADAS (advanced driver assistance system) and its upcoming higher-tech sibling, Ultra Cruise.

Supercorrelation enables a new class of satellite positioning receivers that can measure the directions of the incoming signals, allowing them to ignore reflected and fake “spoofed” signals. Manuel Del Castillo, VP of Sales and Business Development at FocalPoint Positioning, explained to Inside GNSS that the company’s technology focuses on line-of-sight signals with a unique way of analyzing the Doppler frequency.

“FocalPoint’s solution can predict that Doppler and make the antenna work as a synthetic, smart antenna,” he said. “You can do that for all the different angles of arrival of the GPS signals that you’re expecting. That allows you to focus on what you should focus on, and you’re not listening to all the other reflections that come from different angles that are not legitimate.”

That means a GPS chip can compute a position in an urban scenario as if it were on a highway by mitigating the effect of reflections “by a large amount. If you have generally around 20 meters of error, you would get it reduced to around 3 or 2 meters of error,” said Del Castillo.
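
The geometry behind that prediction is standard even though FocalPoint’s implementation is proprietary. A minimal sketch (illustrative only; the function names and the tolerance are assumptions): the line-of-sight Doppler follows from the satellite-receiver relative velocity projected onto the line-of-sight unit vector, and a reflection arriving from another direction shows a mismatched Doppler signature.

```python
import numpy as np

C = 299_792_458.0    # speed of light, m/s
F_L1 = 1575.42e6     # GPS L1 carrier frequency, Hz

def expected_doppler(rx_pos, rx_vel, sat_pos, sat_vel, f_carrier=F_L1):
    """Predicted line-of-sight Doppler shift (Hz) for one satellite."""
    u = sat_pos - rx_pos
    u = u / np.linalg.norm(u)                  # unit vector receiver -> satellite
    range_rate = np.dot(sat_vel - rx_vel, u)   # m/s, positive when separating
    return -range_rate * f_carrier / C         # separating -> negative Doppler

def looks_line_of_sight(measured_hz, predicted_hz, clock_drift_hz, tol_hz=10.0):
    """Illustrative gate: reflections (and spoofed signals) arriving from the
    wrong direction fail to match the motion-predicted Doppler."""
    return abs(measured_hz - (predicted_hz + clock_drift_hz)) < tol_hz
```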

Much of FocalPoint’s work with General Motors is confidential, but Del Castillo provided some interesting context. He said that the automaker is looking to apply the Supercorrelation solution because it’s a more cost-effective approach.

“Others are suggesting solutions that involve a lot of different hardware pieces that are much more costly,” he said.

GPS is a crucial element used by the ADAS to allow driving handoffs between the human driver and the system. The automaker’s engineers want to use Supercorrelation to extend the use of their systems from highways to cities.

“Right now, they cannot do it because GPS is not reliable enough in cities due to reflections,” said Del Castillo. “We’re facilitating the extension of Super Cruise to urban areas.”

The FocalPoint cooperation with General Motors is generating interest from other OEMs.

“They see it’s a cost-effective way of extending ADAS to cities, and others are seeing the same story,” he said. “We’re in that process with many OEMs right now. It’s very promising, but the challenge for FocalPoint is speedy execution because the supply chain is pretty complex.”

That execution involves convincing not only the OEMs but also the Tier Ones and GPS chipmakers that Supercorrelation should be embedded. The chipmakers include key names like Qualcomm, STMicro, and U-blox. FocalPoint has announced a partnership with U-blox, while many others are evaluating the technology.

The fact that it’s a software solution helps the cause.

“Software is the name of the game right now in automotive,” Del Castillo said.

FocalPoint’s entry into automotive is linked to the success of high-definition maps, he said.

“With an error of about 20 meters in a city, it didn’t make sense to have a map with all the lanes in a street,” he said. “With our technology, we are at the point that we can locate the car in the correct lane.”

He credits the work of Google, Here Technologies, and TomTom, as well as General Motors and other OEMs in bringing in the “lane-by-lane” HD maps.

“We’re going hand-in-hand with the HD map suppliers because there’s a lot of synergy in what we’re proposing,” he concluded.

Cost is King for Passenger Car Positioning

For passenger car companies developing advanced driver assistance and automated driving systems, absolute positioning is critical to the overall solution. However, providing cost-effective systems is key for high-volume applications from price-conscious automotive OEMs.

With nearly four decades of experience, Trimble Inc. has been a leader in precise positioning systems for passenger cars. The company has provided its RTX technology for GNSS/GPS correction to General Motors for the Super Cruise hands-free highway driving system since 2018 and more recently for the hands-off freeway driving capabilities of Nissan’s ProPILOT Assist 2.0 on the 2023 Ariya electric SUV.

At Informa’s AutoTech event in Novi, MI, last month, we caught up with Marcus McCarthy, director of Autonomous Navigation Solutions for Trimble’s on-road division, to discuss the latest trends and challenges for passenger-car positioning.

The company’s GNSS chips are used extensively in agriculture and construction applications for which the value of positioning is higher and tied into a business process. That is not the case for the passenger car industry, which tends to use comparatively low-end GNSS receivers from other suppliers with one or two frequencies and a limited number of channels—all for cost reasons, according to McCarthy.

For passenger cars, the company avoids hardware and instead provides positioning-engine software that makes the best use of the data stream to and from GNSS receiver chips. Its solution improves average precision from roughly 3-10 m to better than 20 cm under open sky, which makes a big difference in the targeted car applications, he said. “The challenge for us over the last number of years has been to make those inexpensive receivers as accurate as possible” with software.

The company’s positioning engine does this by outputting position, time, and orientation data, which are fused with data from precision maps and sensors like cameras or lidars. If data from those sources “line up, within reason,” then the system is deemed safe and can be used for driving, he said. If they don’t, that’s a risky situation, and the system disengages.

“For a lane that is 3 m wide, a tolerance within that 20 cm is generally going to be good enough,” he said, for Trimble’s systems used in SAE Level 2 and 3 automated driving systems. However, he says that higher levels of autonomy may need greater precision—in the 10-cm range.

“We already have R&D systems that are there,” McCarthy said. “We will get there with this inexpensive [GNSS receiver] technology.”

However, he says that error estimation is even more important than precision at higher autonomy levels.

“When we output a position, we also output an estimate of error with that position,” he explained. “That estimate of error, in my mind, is more important than the precision of the position because it tells the system whether it can use the position data or not. It’s kind of a black-and-white thing when it comes to saving lives.”
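
A minimal sketch of that gating logic (the interface, threshold and reserve factor are illustrative assumptions, not Trimble’s API): the reported error estimate, rather than the position itself, decides whether the automated system may keep driving on the fix.

```python
def position_usable(reported_error_m: float,
                    lane_width_m: float = 3.0,
                    vehicle_width_m: float = 1.9) -> bool:
    """Gate a GNSS fix on its own error estimate (illustrative rule only).

    The error bound must fit inside the lateral margin between the
    vehicle and the lane edge, with a reserve factor held back.
    """
    lateral_margin_m = (lane_width_m - vehicle_width_m) / 2.0   # ~0.55 m here
    return reported_error_m <= 0.5 * lateral_margin_m

assert position_usable(0.20)       # 20 cm estimate in a 3 m lane: drive
assert not position_usable(1.00)   # 1 m estimate: disengage / fall back
```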

At what point is the data not usable? He put it into numbers.

“The target integrity risk is 10⁻⁷, which is one failure in 1244 years of continuous operation,” said McCarthy. “When we get to Level 4, that’s the standard [at which] we’re operating. I’m not going to trust my family in a vehicle if it’s not operating at that level.”

GM Ventures Invests in FocalPoint’s Anti-Spoofing Technology

CAMBRIDGE, UK—FocalPoint, the UK software company which provides next generation positioning solutions for mobiles, wearables and vehicles, has announced a strategic investment from GM Ventures, the venture capital arm of General Motors, and a collaboration with GM to explore the application of next generation GPS technologies to the automotive market.

FocalPoint’s Supercorrelation technology addresses the critical issue of GPS inaccuracies from current receivers in cities, bringing navigation and positioning better in line with modern customer and business demands. The collaboration with GM will focus on possible applications of the technology in future vehicles, including potential enhancements and expansion of its Super Cruise advanced hands-free driving assistance technology and its upcoming Ultra Cruise advanced driver assistance system in the coming years.

“FocalPoint’s Supercorrelation technology can make navigation and positioning more precise, especially in dense urban environments, which we believe can have significant benefits for the ongoing growth and capability of ADAS and AV systems,” said Scott Pomerantz, FocalPoint CEO. “We’re delighted to expand our advanced technology into the automotive sector with GM, a leader in advanced driver assistance systems.”

“We’re constantly researching technologies to support and enhance the growth and performance of our vehicles, both within our internal research and development operations and through collaborations with innovators beyond our company,” said Kent Helfrich, President, GM Ventures. “This collaboration with FocalPoint targets a specific aspect of the ongoing expansion of Super Cruise and Ultra Cruise going forward.”

FocalPoint was founded to address the critical issue of GPS inaccuracies of current receivers and bring navigation and positioning better in line with the demands of businesses and individuals in the 21st century.

This latest investment follows the announcement in September of a Series C funding round of £23m, led by Molten Ventures and Gresham House Ventures. FocalPoint has also secured funding from the European Space Agency’s NAVISP program to develop a live demonstration and rapid prototyping system to accelerate FocalPoint’s activities in the automotive and mobile sector.

FocalPoint has already licensed some of its technologies to u-blox, the leading European manufacturer of wireless communication and positioning technologies, for use in its GNSS receivers; this collaboration signifies the company’s first steps into the booming automotive market.

GPS Inaccuracy and Spoofing

FocalPoint’s Supercorrelation technology enables a new class of satellite positioning receiver that can measure the directions of the incoming signals, allowing them to ignore reflected signals and fake “spoofed” signals, making them more accurate in cities and more resilient against spoofing attacks.
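
To picture the detection step (a hedged sketch, not FocalPoint’s algorithm; the threshold is an assumption): every genuine satellite should arrive from the azimuth and elevation predicted by its ephemeris, whereas a spoofer broadcasts many “satellites” from one physical direction, so flagged signals that cluster in angle also reveal where the spoofer is.

```python
import numpy as np

def angular_separation_deg(az1, el1, az2, el2):
    """Great-circle angle (deg) between two azimuth/elevation directions."""
    a1, e1, a2, e2 = map(np.radians, (az1, el1, az2, el2))
    cos_ang = (np.sin(e1) * np.sin(e2)
               + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def flag_suspect_signals(measured, predicted, tol_deg=15.0):
    """measured / predicted: dict sat_id -> (azimuth_deg, elevation_deg).

    Returns signals whose measured arrival direction disagrees with the
    ephemeris prediction; a tight angular cluster among the flagged
    signals is the signature of a single-antenna spoofer.
    """
    return {sat: dirn for sat, dirn in measured.items()
            if angular_separation_deg(*dirn, *predicted[sat]) > tol_deg}
```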

Spoofing is used by criminal networks, maritime pirates, and fraudsters to broadcast fake satellite signals and confuse the receiver. This is a critical threat to business and consumers, particularly as the criminals become more sophisticated and the cost of spoofing technology comes down.

FocalPoint’s unique technology can instantly detect fake signals from spoofers, ignore those signals, and pinpoint where in the physical environment the signals are coming from. It is the only consumer-grade product on the market with these performance characteristics.

“This is a major milestone for FocalPoint as we work with General Motors to explore the deployment of our technologies in EVs and AVs. Accurate positioning for autonomous cars and driver assistance platforms is an opportunity for the industry and we are proud to be providing a much-needed solution,” said Ramsey Faragher, FocalPoint Founder, President and CTO.

Kodiak Robotics relies on lightweight mapping for autonomous truck PNT

Kodiak Robotics, Inc. has been on a bit of a roll lately, developing automated solutions for long-haul truck routes in the southern parts of the U.S.

The self-driving trucking company made a big announcement on that front in August, unveiling a partnership with Pilot Company, the largest travel center operator in North America, to develop autonomous truck services at Pilot and Flying J travel centers. 

The companies are creating an autonomous truck port in the Atlanta area to evaluate potential service offerings and explore scalable solutions. The possibilities include spaces to pick up and drop off autonomous trucking loads; inspection, maintenance and refueling of trucks; and data transfer for feature development and mapping.

The partnership is significant for both Kodiak and the industry. It establishes players like Pilot and its travel centers as premier locations to facilitate the various services autonomous trucks will need when they’re in production and deployed commercially, Kodiak CEO Don Burnette said. In addition, the Pilot centers will be access points for transferring data.

The Pilot partnership is just the latest development in Kodiak’s accelerating growth phase in 2022, with significant expansion coming in its service footprint and partner network as well.

In July, the company announced a partnership with 10 Roads Express, a provider of time-sensitive surface transportation for the U.S. Postal Service, expanding the company’s service to Florida. And earlier this year, Kodiak announced a new route between Dallas and Oklahoma City with CEVA Logistics and a route between Dallas and Atlanta with U.S. Xpress.

A Fourth-Generation Autonomous Truck

This commercial success is being driven in part by the leading-edge technology in Kodiak’s fourth-generation autonomous truck. The new generation is designed for improved autonomous system robustness, with greater fleet uptime, manufacturing and serviceability in mind—all of which are critical to scaling the technology quickly, safely and efficiently, according to the company.

“Complex and bulky systems that require an engineer to hand-build and hand-tune are expensive, unreliable and difficult to debug,” said Burnette, who co-founded the Mountain View, CA-based Kodiak Robotics with Paz Eshel. “We believe that reliability and scalability flow from simplicity, and the best hardware modifications should be barely visible. Our fourth-generation platform is designed for simple, scaled production, which means easy calibration, troubleshooting and maintenance for our partners.”

The truck features a modular and more discreet sensor suite in three locations—a slim-profile “center pod” on the front roof above the windshield and pods integrated into both sideview mirrors. This better-integrated sensor placement is said to vastly simplify sensor installation and maintenance while also increasing safety.

The autonomous driving system features Luminar’s Iris LiDAR, Hesai’s 360-degree scanning LiDARs for side- and rear-view detection, ZF’s Full Range Radar, and the Nvidia Drive platform for the AI brains.

The Kodiak Vision perception system treats every sensor—including LiDAR, camera and radar—as primary, according to the company. All three sensor types are purpose-built to meet the needs of autonomous trucks, which must “see” long range in a variety of weather conditions to safely operate at highway speeds.

The system fuses information from the sensors and considers the relative strengths and weaknesses of each type. It incorporates extra redundancies and cross-validates data, adding another layer of safety to the self-driving system.

The patent-pending mirror pods—which will start with one Hesai LiDAR, two long-range 4D radars and three cameras—don’t require specialized sensor calibration. Rather than replacing a sensor in need of maintenance, a mechanic can replace the mirror pod in minutes. This single point of integration will allow for maintenance and serviceability at scale.

To make sense of all the data, the trucks will feature Nvidia Drive Orin, once available, as the supercomputing platform. With more than 250 TOPS (trillion operations per second) of compute performance, the platform is architected for safety and addresses systematic safety standards such as ISO 26262 ASIL-D (Automotive Safety Integrity Level-D). In the interim, Kodiak will use the current-generation Nvidia Drive AGX Pegasus to process data from cameras.

The Importance of PNT

When it comes to positioning, navigation and timing (PNT), Burnette said it “is definitely an area where we feel like Kodiak is really innovating within the space.” 

Within a mapping and localization framework, he said “ultimately the robot needs to answer the question, ‘Where am I?’ And once it knows where it is, then it needs to ask the question, ‘How do I drive from here?’ And then you just repeat those questions.” 

Historically, most companies have implemented high-definition (HD) maps of the environment using vehicle sensors to identify fine details like road surface textures, paint markings, tree trunks, buildings, sides of buildings, etc., Burnette said.

“You name it, they put it in the map, and then they use that map to position themselves very finely in the real world,” he said. “And then they use an IMU [inertial measurement unit] to interpolate between those position-based references.”

His company diverges from the industry in this respect.

“We do it differently,” he said. “We have a very sparse mapping solution that only includes the road network—the lane connectivity information.”

For instance, the Kodiak system and its lightweight sparse map keep track of the number of lanes and their relation to each other and know where the exits and cloverleafs are.

“From there, we localize based on what our sensors see relative to the lane markings that are relevant for driving, much the same way [that] humans do,” he said.

Kodiak engineers use an IMU to interpolate truck location as it moves down the road.

“Our positioning is very high fidelity in a lateral sense, but it’s not as high fidelity in a longitudinal sense,” Burnette said. “It just doesn’t matter if we’re one foot farther or one foot back on the road. As long as we’re close enough to be within the vicinity, we can identify key markers to tell us where we need to take our exits and where to expect other vehicles to be.”

That’s where GPS comes in.

“We have a very loose reliance on GPS just to bootstrap the system when we’re just getting started to kind of tell us where we are initially,” he said, “but also then to pull us gradually along longitudinally to maintain that semi accuracy.” 
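
A toy version of that “gradual pull” (an illustrative complementary filter, not Kodiak’s code; the gain is an assumption) keeps the along-track estimate on odometry and lets GPS correct it slowly, so a noisy or briefly absent fix never jerks the estimate:

```python
def update_along_track(s_est, ds_odom, s_gps=None, gain=0.02):
    """One step of along-track (longitudinal) position estimation.

    s_est   -- current along-route position estimate (m)
    ds_odom -- distance traveled this step from odometry / IMU (m)
    s_gps   -- along-route position implied by the GPS fix, or None
    gain    -- small blending gain: GPS only pulls the estimate gradually
    """
    s_pred = s_est + ds_odom                  # dead-reckoning prediction
    if s_gps is None:                         # GPS outage: coast on odometry
        return s_pred
    return s_pred + gain * (s_gps - s_pred)   # gentle longitudinal correction
```

On first startup the raw GPS fix would initialize the estimate outright (the “bootstrap”), after which lateral positioning comes from lane perception and only the longitudinal channel leans on GPS.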

[Image: Kodiak focuses on easy SensorPod maintenance.]

Reliability is King

While performance and cost are important considerations for IMUs and GPS units—as well as the perception sensors—for autonomy, the key metric Kodiak developers are interested in is reliability.

“I think this is a bit of a surprise for most people, and it applies to all the sensors,” Burnette said. “At this stage, we’re not looking for improved performance. I think we have the performance we need from our sensors, compute and hardware that would be acceptable to launch this product safely. Cost is somewhat important, but what we really care about is reliability.”

The company has not announced the sources for its IMU and GPS unit, but wants suppliers capable of building solutions that can withstand the harsh environment trucks face day in and day out without breaking.

“If you can build a unit that will go hundreds of thousands of miles on the highway without breaking, that’s what we care about,” he said. “We care about reliability much more than any kind of fancy gizmo.”

The ability to withstand the typical shock/vibration is a top consideration, especially for IMUs, along with water ingress and temperature swings. 

“We drive in the heat of a Texas summer where it can get extremely hot, and it also must be able to work in the bitter cold,” he said. “So, temperature, shock and vibe, water ingress, reliability—just general wear and tear—those are the types of things that we evaluate.”

Those evaluations continue as the company aims to integrate its self-driving software and hardware, the Kodiak Driver, into production customer trucks in early 2025. The Kodiak Driver will operate self-driving fleets for a low per-mile subscription fee.

Graphing a Way out of Multipath: Robust Navigation for Autonomous Vehicles and Robots

A factor-graph optimization-based GNSS positioning method uses GNSS pseudorange and Doppler observations to estimate position, velocity, and receiver clock biases. Constraints added between past and current graph nodes, using time-difference observations of the GNSS carrier phase, improve the accuracy, and a robust optimization method excludes multipath outliers. In experiments, the method reduced horizontal positioning error from 5-10 meters to 1.37 meters.

TARO SUZUKI

Chiba Institute of Technology, Japan

Autonomous cars and outdoor mobile robots must overcome multipath signals: GNSS single point positioning typically achieves only few-meter accuracy in urban environments, which is insufficient for autonomous navigation. Combining GNSS with other sensors and 3D mapping data is complex and expensive, so a method is needed to improve positioning accuracy in urban environments using only GNSS.

Methods using graph-based optimization have attracted attention recently, including research in robotics and computer vision [see References]. Compared with traditional filtering approaches, the graph optimization approach generally yields better performance in multi-sensor integration.

This article focuses on a graph construction method using GNSS only, unaided by inertial navigation, to improve positioning accuracy in multipath environments. The vehicle position is estimated robustly and accurately by adding a new type of constraint to the graph optimization: time-relative real-time kinematic (TR-RTK) GNSS from a single receiver, which provides a precise constraint between past and current nodes. In addition, adding the clock biases between the multi-GNSS constellations to the estimated state, and using switchable constraints to exclude multi-GNSS pseudorange and Doppler observations that contain multipath errors, further improves the positioning accuracy. Finally, the article evaluates the method’s accuracy using open GNSS datasets acquired in an urban environment.

Factor-Graph Optimization

Formally, a factor graph is a bipartite undirected graph that includes two types of nodes, variable nodes and factor nodes, and edges. A variable node represents the state variable $X_i$ at the $i$-th moment. An edge connects a factor node and a variable node and encodes the error function $e(\cdot)$, where each edge corresponds to a single observation $Z_i$. The factor-graph optimization can be represented as:

$$\hat{X} = \arg\min_{X} \sum_{i} e(X_i, Z_i)^\top \, \Omega_i \, e(X_i, Z_i) \tag{1}$$

where $\Omega_i$ represents the information matrix of a constraint relating to the parameters $X_i$ and determines the weight of an error function over the entire optimization problem. We consider the GNSS positioning problem as an optimization problem by constructing a factor graph from GNSS observations.
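
To make equation (1) concrete, here is a toy one-dimensional instance (illustrative only; the authors’ implementation uses GTSAM, as noted later): variable nodes are scalar positions, factor nodes are absolute and relative observations, and each residual carries its own weight.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1D factor graph: three position nodes x0..x2 with
#  - absolute "pseudorange-like" observations of each node, and
#  - relative "motion" observations between consecutive nodes.
z_abs = np.array([0.2, 1.1, 2.3])   # noisy absolute observations
z_rel = np.array([1.0, 1.0])        # noisy relative observations
w_abs, w_rel = 1.0, 10.0            # square-root information weights (assumed)

def residuals(x):
    r_abs = w_abs * (x - z_abs)             # one factor per absolute observation
    r_rel = w_rel * (np.diff(x) - z_rel)    # one factor per node pair
    return np.concatenate([r_abs, r_rel])

sol = least_squares(residuals, x0=np.zeros(3))  # minimizes the form of eq. (1)
print(sol.x)  # trajectory pulled toward the stiffer relative (motion) factors
```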

The accuracy of position estimation by factor-graph optimization depends significantly on the graph structure and the type of factors used. The graph structure shown in Figure 1 (A) is the simplest structure and directly uses the position and velocity computed by GNSS as edges. This structure is called loose coupling (LC). The graph structure in Figure 1 (B), on the other hand, uses the pseudorange and Doppler observations from each GNSS satellite as edges. This structure is called tight coupling (TC). As an edge between nodes, we can use a dynamics factor such as velocity. In the case of TC, it is necessary to add the clock bias of the GNSS receiver to the estimated state. Compared to LC, TC processes the observations from each satellite independently, so outlier rejection methods such as switchable constraints can be applied to the observations of each satellite. Therefore, TC is expected to improve positioning accuracy in multipath environments.

However, the accuracy of the pseudorange observations is at the meter level and is susceptible to multipath signals. Here we add centimeter-level TR-RTK-GNSS constraints to the TC-based graph structure using observations from a multi-GNSS constellation to achieve robust GNSS positioning in multipath environments.

We use GNSS pseudorange, Doppler, and carrier phase measurements as constraints of the factor graph. The node $X_i$ estimated in epoch $i$ comprises the 3D position and velocity in earth-centered earth-fixed (ECEF) coordinates, the multi-GNSS receiver clock biases, and the receiver clock bias rate:

$$X_i = \left[\, p_i^\top,\; v_i^\top,\; t_{gps,i},\; t_{glo,i},\; t_{gal,i},\; t_{bds,i},\; \dot{t}_i \,\right]^\top \tag{2}$$

where $p_i$ and $v_i$ are the ECEF position and velocity, $t_{gps,i}$ is the receiver clock bias with respect to GPS time, and $t_{glo,i}$, $t_{gal,i}$, and $t_{bds,i}$ are the system biases, including the time offsets of GLONASS, Galileo, and BeiDou relative to GPS time.

Figure 2 illustrates the structure of graph-based optimization using the proposed method. Four main factors exist: multi-GNSS pseudorange, Doppler, motion and TR-RTK-GNSS. Here $n$ is the total number of estimated states, and $m$ is the number of GNSS satellites. Unlike conventional TC, this graph structure has accurate TR-RTK-GNSS factors between distant nodes, making it possible to correct the accumulated error and estimate an accurate trajectory. We use switchable constraints to reject the multipath error of pseudorange and Doppler measurements. A set of switch variables $S$ with components $s_i$ is added to the state. The optimizer can find the best possible choice of each switch variable automatically. The switchable constraints can be thought of as observation weights that are optimized concurrently with the state estimates. This proposed graph structure does not require other sensors such as an inertial measurement unit (IMU) or GNSS reference stations but is constructed using only a single GNSS receiver.

Pseudorange and Doppler Factors

Pseudorange and Doppler measurements of each satellite are used as constraints to estimate the position and velocity. We compensate the ionosphere and troposphere errors in the pseudorange using the Klobuchar and Saastamoinen models, as in standard single point positioning. The satellite clock error is compensated using the broadcast ephemeris. As for the receiver clock error and the different system biases, we include them in the nodes and estimate them.

For further detailed equations relating to the pseudorange, Doppler observations, ionospheric delay and motion factor (the relative constraint between time-series nodes), see the ION GNSS+ 2021 paper on which this article is based, at ion.org/publications/browse.cfm.


TR-RTK-GNSS Factor

This study attempts to construct loop-closure edges using the TR-RTK-GNSS technique. Previous GNSS pseudorange and carrier-phase measurements are stored and used to construct double-difference measurements between the past and current measurements. The baseline vectors pointing from the past epoch to the current position are determined exactly. We use these baseline vectors as the loop-closure edges in the graph. The GNSS carrier phase measurement of satellite $k$ in epoch $i$, denoted $\Phi_i^k$, is given by:

$$\Phi_i^k = r_i^k + \lambda N_i^k + c\left(\delta t_i - \delta T_i^k\right) - I_i^k + T_i^k + \varepsilon_i^k \tag{3}$$

where $r_i^k$ is the range from the receiver to satellite $k$, $\lambda$ is the signal wavelength, $N_i^k$ is the integer ambiguity, $c$ is the speed of light, $\delta t_i$ and $\delta T_i^k$ are the receiver and satellite clock biases, respectively, $I_i^k$ and $T_i^k$ are the ionosphere and troposphere errors, and $\varepsilon_i^k$ is the carrier phase measurement noise. In general, when two receivers are used, mutual error terms exist in the measurements from the same satellite. The ionosphere, troposphere and satellite clock bias errors may be eliminated by differencing the base and rover receiver measurements. In contrast, we consider the time-differential carrier phase using a stand-alone GNSS receiver. (Further discussion can be found in the ION GNSS+ paper cited earlier.)
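
For orientation, a sketch of the time-differencing step consistent with equation (3) (the exact formulation in the paper may differ): subtracting the carrier phase of satellite $k$ at a past epoch $j$ from that at the current epoch $i$ gives

$$\Phi_i^k - \Phi_j^k = \left(r_i^k - r_j^k\right) + \lambda\left(N_i^k - N_j^k\right) + c\left(\delta t_i - \delta t_j\right) - \left(I_i^k - I_j^k\right) + \left(T_i^k - T_j^k\right) + \varepsilon_{ij}^k$$

The atmospheric differences are small over short intervals, and differencing again between satellites removes the receiver clock term, yielding the double-difference observable used here. The ambiguity difference $N_i^k - N_j^k$ vanishes only under continuous phase lock, which is exactly why, as explained next, the method keeps the ambiguity in the estimation rather than assuming it cancels.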

The GNSS carrier phase measurements are biased by an unknown integer ambiguity. This ambiguity will be a time-invariant constant, provided that there is a continuous phase lock to the respective GNSS satellite in the receiver phase lock loop. However, the GNSS carrier phase measurements are sensitive to signal shadowing. In urban environments, tall buildings and overpasses are likely to cause complete signal obstruction. In these situations, cycle slip of the GNSS carrier phase occurs, and the integer ambiguity will not be a time-invariant constant. For this reason, we do not eliminate integer ambiguity in the time difference computation.

The form of the observation equation of TR-RTK-GNSS is similar to that of normal RTK-GNSS. Therefore, the baseline vectors and ambiguity are estimated using the ambiguity estimation method used in RTK-GNSS. In general, the ambiguities are first estimated as float numbers using a Kalman filter or a least-squares method from the double-difference carrier phase measurements in a single epoch or multiple epochs. The float ambiguities are then resolved to integers, and the accurate baseline vector can be computed using the estimated integer ambiguities. We use the LAMBDA method to estimate the integer ambiguities, so the precise relative position between the past and current epochs can be estimated.

We create a combination of epochs within the past few minutes from the current epoch and execute the proposed method. If TR-RTK-GNSS can correctly estimate the carrier phase ambiguity as an integer, we add all the TR-RTK-GNSS constraints to the graph. We do not add the constraint if the integer carrier phase ambiguity cannot be solved by the LAMBDA method.


Optimization

We add all the constraints into the pose graph. We use switchable constraints, and the switch value for each satellite is estimated simultaneously, as shown in Figure 2. The switch variable takes a value between 0 and 1 and serves as a weight for the GNSS observations. When the pseudorange residuals become large, the switch value becomes small, making it possible to automatically exclude pseudorange and Doppler observations that contain multipath errors.
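
A compact way to see how a switch variable de-weights an outlier (a sketch of the switchable-constraints idea of Sünderhauf and Protzel [1], not the paper’s implementation; the prior weight is an assumption):

```python
import numpy as np

def switchable_residuals(e_obs, s, prior_weight=1.0):
    """Residual blocks contributed by one switchable constraint.

    e_obs        -- raw observation residual (e.g., pseudorange misfit, m)
    s            -- switch variable, optimized jointly with the states
    prior_weight -- penalty pulling s back toward 1 (trust the observation)
    """
    s = np.clip(s, 0.0, 1.0)
    return np.array([s * e_obs,                  # switch scales the observation
                     prior_weight * (1.0 - s)])  # prior resists switching off

# Minimizing s^2 e^2 + w^2 (1 - s)^2 over s gives s* = w^2 / (e^2 + w^2):
# near 1 for inliers, near 0 for gross multipath outliers, matching the
# estimated switch states discussed later with Figure 8.
```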

The objective of localization is to find the optimal state variables that minimize the aforementioned error function. We use the Dogleg optimizer to obtain the optimal solution, and finally we can estimate an accurate vehicle trajectory. The proposed method using TR-RTK-GNSS improves positioning accuracy without using any additional sensors or data from a GNSS base station.

Experimental Setup

To confirm the effectiveness of the proposed method, we conducted kinematic positioning tests using a vehicle in actual urban environments. Figure 3 shows the setup of the positioning test and the travel route. Buildings stand around the travel route, and therefore the environment is susceptible to satellite masking.

For the evaluation, a position estimation system for land vehicles that uses a multiple-frequency GNSS receiver and a high-grade IMU was used to estimate the reference positions as ground truth. The absolute position estimates are accurate to the centimeter level according to the catalog specification, which is accurate enough to serve as ground truth for comparing the proposed methods. This experimental data is open to the public as the “UrbanNav” dataset [16] and can be used by anyone.

Figure 4 shows the actual locations of the GNSS satellites observed at locations “A” and “B” on the travel path shown in Figure 3. Here, G, R, J, E, and C indicate GPS, GLONASS, QZSS, Galileo, and BeiDou, respectively. Using ground truth location information, we extracted 3D information of the surrounding area from Google Earth, converted it into a virtual fisheye image, and projected the received satellite position. As shown in Figure 4 (B), part of the course is shielded by an elevated railroad. As can be seen in Figure 4 (A), multipath signals are received from satellites hidden behind buildings. These multipath signals have a significant impact on positioning accuracy.


Evaluation and Results

We compare the positioning error of the proposed graph optimization with the conventional single point positioning (SPP) result computed with a Kalman filter. We also compare against general TC-based graph optimization, which uses the same multi-GNSS observations but omits the TR-RTK-GNSS factor of the proposed method. By comparing these positioning results, we evaluate the effectiveness of the proposed method.

We used GTSAM as the graph optimization backend and RTKLIB for general GNSS computation. The proposed method is implemented in MATLAB, and the performance evaluation was conducted in post-processing.

Figure 5 shows a visualization of the TR-RTK-GNSS factor in the graph structure constructed by the proposed method. The blue circles in the figure represent the nodes of the graph, and the red lines represent the edges with the TR-RTK-GNSS factor. The figure shows an enlarged view of part of the course, where we can see that constraints are added between distant nodes. Because the TR-RTK-GNSS constraints are added only when the carrier phase ambiguity is estimated to be solved correctly, the constraints between nodes are somewhat sparse. In environments where GNSS satellites are shielded, such as under an elevated railroad, no TR-RTK-GNSS constraint is generated. The black square area in the figure marks where the vehicle passed under the elevated railroad. By solving the ambiguity of the carrier phase observations, the TR-RTK-GNSS technique can add exact constraints between the nodes before and after the passage under the elevated railroad.

Figure 6 shows the histogram of the time differences of the fixed solutions computed with the TR-RTK-GNSS technique. The maximum time difference was 95 s in this test, which was the limit for constructing loop-closure constraints with the proposed TR-RTK-GNSS technique. However, the ability to create edges between distant nodes by solving for the carrier phase ambiguity, as shown in Figure 5, contributes to higher GNSS positioning accuracy in urban environments.

Figure 7 shows the horizontal positioning errors of each method. The blue line indicates the positioning error of SPP, and the green line indicates that of the proposed method. SPP shows large errors of several tens of meters because of multipath, whereas the proposed method reduces these positioning errors substantially. This result shows the multipath rejection scheme and TR-RTK-GNSS constraints of the proposed graph optimization allow highly accurate position estimation in urban environments.

Table 1 shows the positioning error of each method. Compared with the conventional SPP, the maximum positioning error is reduced from 88.2 m to 7.45 m by the proposed method. The horizontal root mean square error (RMSE) is also reduced from 10.38 m to 1.37 m. Compared to the TC-based graph optimization, the contribution of the TR-RTK-GNSS factor further improves the positioning accuracy.

Figure 8 shows the values of the estimated switch states of the GPS satellites after optimization. Each line in the figure indicates the switch state of one satellite, and the switch value indicates the quality of the corresponding GNSS measurement. It is apparent that switch variables sometimes drop close to zero, which shows the optimization could clearly recognize the outlier measurements and distinguish them from the inliers. Only the inlier pseudorange and Doppler measurements are used to estimate the vehicle position and velocity; as a result, the proposed method can estimate an accurate position in multipath environments.


Conclusion

A high-accuracy positioning method for urban multipath environments uses a graph optimization method employing only GNSS. The method rejects outliers in multi-GNSS pseudorange and Doppler observations using switchable constraints and constructs a graph that fully utilizes multi-GNSS by including the inter-system clock biases in the state. Furthermore, we added constraints between distant nodes by TR-RTK-GNSS, using the time difference of GNSS carrier phase observations, to improve GNSS positioning accuracy. The accuracy of the proposed method was evaluated using GNSS datasets from an urban environment, and the results show the proposed method achieves the highest accuracy compared with standard single point positioning and the LC- and TC-based graph optimization methods.

In future research, we will work on how to optimize the proposed graph structure in real-time using incremental smoothing and mapping (iSAM) and other methods. This will allow us to use the proposed method for real-time position estimation, such as navigation for automated driving. In addition, by combining the proposed method with inertial navigation systems, we aim to improve the accuracy in environments where GNSS cannot be used, such as tunnels and under elevated tracks. 

Manufacturers

The POSLV from Applanix was used to estimate ground truth in the experiment.

References:

(1) N. Sünderhauf and P. Protzel, “Switchable constraints for robust pose graph SLAM,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 1879–1884.

(2) D. Chen and G. X. Gao, “Probabilistic graphical fusion of LiDAR, GPS, and 3D building maps for urban UAV navigation,” NAVIGATION, Journal of the Institute of Navigation, vol. 66, no. 1, pp. 151–168, Jan. 2019.

(3) N. Sünderhauf, M. Obst, G. Wanielik, and P. Protzel, “Multipath mitigation in GNSS-based localization using robust optimization,” in IEEE Intelligent Vehicles Symposium, 2012, pp. 784–789.

(4) W. Li, X. Cui, and M. Lu, “A robust graph optimization realization of tightly coupled GNSS/INS integrated navigation system for urban vehicles,” Tsinghua Science and Technology, vol. 23, no. 6, pp. 724–732, Dec. 2018.

(5) R. M. Watson and J. N. Gross, “Robust navigation in GNSS degraded environment using graph optimization,” in 30th International Technical Meeting of the Satellite Division of the Institute of Navigation (ION GNSS 2017), 2017, vol. 5, pp. 2906–2918.

(6) W. Wen, T. Pfeifer, X. Bai, and L.-T. Hsu, “Factor graph optimization for GNSS/INS integration: A comparison with the extended Kalman filter,” NAVIGATION, vol. 68, no. 2, pp. 315–331, 2021.

(7) T. Suzuki, “Time-relative RTK-GNSS: GNSS loop closure in pose graph optimization,” IEEE Robotics and Automation Letters, vol. 5, no. 3, pp. 4735–4742, 2020.

(8) F. R. Kschischang, B. J. Frey, and H. A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 498–519, 2001.

Author

Taro Suzuki is a chief researcher at Chiba Institute of Technology, Japan. He received his Ph.D. in engineering from Waseda University and worked as a postdoctoral researcher at Tokyo University of Marine Science and Technology and as an assistant professor at Waseda University. His current research interests include GNSS precise positioning in urban environments. Another ION GNSS+ 2021 paper of his, “Global Optimization of Position and Velocity by Factor Graph Optimization,” won first place in the Google Smartphone Decimeter Challenge.

Q: What is the future of autonomous vehicles?

The concept of autonomous driving has generated a lot of interest and attention in the past decades, as it is believed to provide numerous benefits for individuals and society: increased road safety; reduced traffic congestion, accidents and deaths; and time saved and pollution avoided in commuting. There is a lot of ongoing work on this topic: much research and many experiments have been conducted on how to make cars learn the environment, make human-like decisions and drive on their own.

DI QUI, Polaris Wireless

A: Today, we see many self-driving vehicles being tested on the roads in controlled environments, under the supervision of a human driver, in good road and environmental conditions. Researchers forecast that by 2025 [1], there will be approximately 8 million autonomous or automated vehicles on the road. Before merging onto roadways, self-driving cars will first have to progress through six levels of driver assistance technology advancements.

The Society of Automotive Engineers (SAE) defines six levels of driving automation ranging from 0 to 5 [2]. These levels have been adopted by the U.S. Department of Transportation (DoT).

Level 0—No Driving Automation

Today, most vehicles on the road are at this level and are manually controlled. The human performs the dynamic driving task, and there are systems in place to help the driver. Information sources that include GNSS, vehicle motion sensors, and road maps are integrated using an information fusion algorithm. Such map-matched information, together with traffic situation information, is provided to the human driver for assistance and guidance. An example of Level 0 automation would be an emergency braking system.

Level 1—Driver Assistance 

At this lowest level of automation, the vehicle features a single automated system for driver assistance, such as steering, accelerating or cruise control. Adaptive cruise control, where the vehicle is kept at a safe distance from surrounding cars, qualifies as Level 1 automation. The human driver monitors and controls the other aspects of driving.

Level 1 automation and beyond uses a multisensor platform that includes cameras, radar, LiDAR, GNSS, inertial measurement units (IMUs) and inertial navigation systems (INS), and ultrasonic sensors. Ultrasonic sensors are widely available for parking, but they are of minor importance for autonomous driving. LiDAR systems today are rarely used in series production because of cost and availability. Camera and radar are prerequisites for all further levels of automation.

Level 2—Partial Driving Automation 

Vehicles with Advanced Driver Assistance Systems (ADAS) can control both steering and accelerating/decelerating. At this level, a human driver sits in the driver’s seat and can take control of the car at any time. Examples of this level of automation include Tesla Autopilot and Cadillac (General Motors) Super Cruise systems. 

Absolute localization is computed from GNSS, PPP or RTK, IMU and odometry. Perception sensors like cameras, radar, and/or LiDAR are used to obtain relative localization. Both absolute and relative localization outputs together with path planning are integrated for steering control, acceleration and brake control.

Level 3—Conditional Driving Automation

Level 3 vehicles have environmental detection capabilities and can make informed decisions, such as assisting with an emergency stop by decelerating and stopping the vehicle while alerting surrounding cars. Human override is still required at this level. The driver must remain alert and ready to take control if the system is unable to execute the task.

A feature such as traffic jam pilot is a good example of Level 3 automation. The system handles all acceleration, steering and braking while the human driver can sit back and relax. From Level 2 onward, real-time high-precision GNSS absolute positioning solutions are required to achieve lane determination, with or without a local base station network. Sub-lane-level accuracy unlocks many autonomous driving features and enables planning beyond perception limits, thus improving system integrity and safety assurance [3].

In 2017, Audi announced the world’s first production Level 3 vehicle, Audi A8, which features Traffic Jam Pilot [4]. However, the A8 is still classified as a Level 2 vehicle in the United States because the U.S. regulatory process shifted from federal guidance to state-dictated mandates. 

The vehicle retains an architecture compatible with Level 3 autonomy but lacks the redundancy systems to completely take over driving duties. Mercedes-Benz has become the first automotive company in the world to meet the necessary requirements (granted under UN-R157) for international approval of Level 3 automation [5]. The Mercedes-Benz Drive Pilot Level 3 autonomous driving feature will become available in Europe early this year. Road-legal certification in the U.S. has not been finalized yet.

Level 4—High Driving Automation

The key difference between Level 3 and Level 4 automation is that Level 4 vehicles can intervene if there is a system failure. These vehicles do not require human interaction in most circumstances, although a human driver still has the option to manually override. Until legislation and infrastructure evolve, self-driving mode can only be used in limited areas, such as geofence-defined areas.

High-accuracy maps and highly reliable localization are critical to achieving a Level 4 system. Localization failures usually trigger an emergency stop. Current Level 4 systems use LiDAR, cameras and radar for both perception and localization, primarily relying on LiDAR for localization. GNSS is not the primary location sensor because of availability and integrity challenges. 

According to the German Transportation Ministry [6], autonomous vehicles and driverless buses are set to make their debut on German public roads after lawmakers approved a new law on autonomous driving in June 2021. 

The law intends to bring autonomous vehicles at SAE Level 4 into regular operation in early 2022. More and more companies are showing promise with Level 4 automation. Waymo has partnered with truck builder Daimler to develop a scalable autonomous truck platform intended to support SAE Level 4 [7].

Level 5—Full Driving Automation

Level 5 vehicles do not require human attention; the system performs the entire dynamic driving task. Level 5 cars will not have steering wheels or acceleration/braking pedals. They will be free from geofencing, able to go anywhere an experienced human driver can. Fully autonomous vehicles are still undergoing testing, with none available to the general public yet.

For a long time, the SAE standard of five levels of autonomous driving has been the norm. However, Germany’s Federal Highway Research Institute presented an alternative, suggesting a switch from five levels to three modes to simplify the discussion on autonomous vehicles [8]. They are:

• Assisted mode—The first mode combines SAE Level 1 and 2, where the human driver is supported by the vehicle but must remain alert and ready to intervene at all times.

• Automated mode—The second mode incorporates SAE Level 3. The human driver temporarily hands over the steering wheel to the software/computer and can perform other activities for a more extended period.

• Autonomous mode—The third mode combines the SAE Level 4 and 5. The vehicle has the complete driving authority.

What are the challenges of autonomous vehicles?

Many people are still curious about how to design an autonomous or driverless vehicle system capable of handling a vehicle like a human driver in all possible conditions. An autonomous vehicle combines sensors and actuators, sophisticated algorithms, and powerful processors to execute software. Hundreds of such sensors and actuators are situated in various parts of the vehicle, driven by a highly sophisticated system.

There are three categories of sensory systems in autonomous vehicles: 1) navigation and guidance sensors to determine where you are and how to get to a destination; 2) driving and safety sensors, like cameras, to make sure the vehicle acts properly under all circumstances and follows the rules of the road; and 3) performance sensors to manage the vehicle’s internal systems, such as power control, overall consumption and thermal dissipation.

While autonomous vehicle systems may vary slightly from one to another, the core software generally includes localization, perception, planning and control. A perception system senses, understands and builds a full awareness of the environment and the objects around it using cameras, LiDAR and radar sensors. Planning software is responsible for path planning, risk evaluation, task management and path generation. Machine learning (ML) and deep learning (DL) techniques are widely used for localization and mapping, sensor fusion and scene comprehension, navigation and movement planning, evaluation of a driver’s state and recognition of a driver’s behavior patterns, and intelligence learning for perception and planning. Mapping software can be used to generate and update high-definition lane-level map data from collected sensor data and post-processing.
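
The division of labor among these modules can be summarized in a minimal skeleton; module names and interfaces are illustrative, not any specific product's API:

# Skeleton of the localization -> perception -> planning -> control loop.
# Module boundaries and names are illustrative assumptions.

class AVSoftwareStack:
    def __init__(self, localization, perception, planner, controller):
        self.localization = localization   # GNSS/IMU/odometry + map matching
        self.perception = perception       # camera/LiDAR/radar scene understanding
        self.planner = planner             # path planning and risk evaluation
        self.controller = controller       # steering/throttle/brake commands

    def step(self, sensor_frame):
        pose = self.localization.update(sensor_frame)      # where am I?
        scene = self.perception.update(sensor_frame, pose) # what is around me?
        path = self.planner.plan(pose, scene)              # where should I go?
        return self.controller.track(pose, path)           # actuator commands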

Sensor calibration serves as the foundation for an autonomous system and its sensors. It is a requisite step before sensor fusion, localization and mapping, and control can be implemented. Calibration determines each sensor’s position and orientation in real-world coordinates by comparing the relative positions of known features the sensors detect.
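
One common way to recover a sensor's pose from corresponding features is the Kabsch/Procrustes least-squares solution; the following is a minimal sketch, assuming matched 3D point sets are already available, and is one of several possible methods rather than a prescribed procedure:

import numpy as np

# Illustrative extrinsic calibration: recover a sensor's rotation and translation
# from N corresponding 3D features seen in the sensor frame and known in the
# vehicle/world frame (Kabsch/Procrustes least-squares rigid alignment).

def calibrate_extrinsics(pts_sensor, pts_world):
    """Least-squares rigid transform such that R @ p_sensor + t ~= p_world."""
    cs, cw = pts_sensor.mean(axis=0), pts_world.mean(axis=0)
    H = (pts_sensor - cs).T @ (pts_world - cw)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cw - R @ cs
    return R, t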

Sensor fusion is one of the essential tasks; it integrates information obtained from multiple sensors to detect outliers and reduce the uncertainty in each individual sensor’s data, enhancing accuracy, reliability and robustness. There are three levels of fusion approaches for both perception and localization systems: high-level/decision level, mid-level/feature level, and low-level/raw data level [9].

In decision-level fusion, each sensor carries out detection or tracking algorithms separately, and the results are subsequently combined into one global decision. Feature-level fusion extracts contextual descriptions or features from each sensor’s data and fuses them into a single signal for further processing. In raw data-level fusion, sensor data are integrated at the lowest abstraction level for better quality and low latency.

Each fusion level has its strengths and weaknesses in terms of accuracy, complexity, computational load, communication bandwidth and fusion efficiency. Commonly used fusion algorithms are statistical methods, probabilistic methods such as the Kalman filter and particle filter, knowledge-based methods, and evidence reasoning methods. An environmental perception map is built from information about obstacles, the road, the vehicle, the environment and the driver. Localization is commonly performed using GNSS, IMUs, cameras and LiDAR.
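
As a concrete instance of the probabilistic methods mentioned above, here is a minimal one-dimensional Kalman filter fusing high-rate predictions with lower-rate GNSS position fixes; all noise parameters and rates are illustrative:

import numpy as np

# Minimal 1D constant-velocity Kalman filter fusing high-rate IMU-style
# predictions with lower-rate GNSS position fixes; parameters are illustrative.

dt = 0.01                                   # 100 Hz prediction
F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position, velocity]
Q = np.diag([1e-4, 1e-3])                   # process noise (drift model)
H = np.array([[1.0, 0.0]])                  # GNSS measures position only
R = np.array([[0.25]])                      # (0.5 m)^2 GNSS noise

x = np.zeros((2, 1))
P = np.eye(2)

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def gnss_update(z):
    global x, P
    y = np.array([[z]]) - H @ x             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

for k in range(500):                        # 5 s of driving at 100 Hz
    predict()
    if k % 100 == 0:                        # 1 Hz GNSS fix
        gnss_update(z=0.05 * k * dt)        # toy measurement stream
print(x.ravel())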

Emerging studies propose different approaches that avoid the need for localization and mapping stages and instead sense the environment to produce end-to-end driving decisions [10]. The three localization techniques [9] used in autonomous driving are:

• GNSS/IMU-based localization together with DGPS and RTK to ensure the continuity of GNSS signals, 

• visual-based localization that includes simultaneous localization and mapping (SLAM) and visual odometry, and

• map-matching-based localization that uses “a priori maps.”

A comparison of the localization techniques on accuracy, cost, computational load, sources of external effects, and data storage size is shown in Table 1. Studies that include GNSS/IMU as part of the localization system in autonomous driving mostly use probabilistic methods, which represent decision-level to feature-level sensor fusion.

The following five major challenges of self-driving cars require continuing research and development effort. 

• Sensors. Sensors in autonomous vehicles map the environment and feed data back to the car’s control system to help make decisions about where to steer or when to brake. A fully autonomous vehicle needs accurate sensors to detect objects, distance, speed and so on under all conditions and environments. Poor weather, heavy traffic and unclear road signs can negatively impact the sensing capability of LiDAR and cameras.

Another potential threat is radar interference. On the road, a car’s radar continuously emits radio frequency waves, which are reflected from surrounding cars and other objects near the road. When this technology is used by hundreds of vehicles on the road, it is challenging for a car to distinguish between its own (reflected) signal and the signal (reflected or transmitted) from another vehicle. Given the limited radio frequencies available for radar, they are unlikely to be sufficient for all the autonomous vehicles manufactured. Although GNSS has the advantages of worldwide coverage, all-weather operation, and providing absolute positions without map or road marking information, its overall accuracy and availability have been a concern for fully autonomous systems.

• Machine learning. Most autonomous vehicles use AI or ML to process the data from their sensors to better classify objects, detect distance and movement, and help decide on the next actions. ML optimizes and better integrates different sensor outputs into a more complete picture. It is expected that machines will eventually perform detection and classification more efficiently than a human driver can. As of now, it is not widely accepted that machine learning algorithms are reliable under all conditions [11]. There is a lack of agreement across the industry on how machine learning should be trained, tested and validated.

• The open road. An autonomous vehicle continues to learn once it is on the road. It detects objects it has not come across in its training and updates its software. The industry needs a mechanism or an agreement to ensure any new learning is safe.

• Regulation. Sufficient standards and regulations for a fully autonomous system do not exist. Current standards for the safety of existing vehicles assume the presence of a human driver to take over in an emergency. For autonomous vehicles, there are emerging regulations for particular functions, such as automated lane keeping systems. Without recognized regulations and standards, it’s risky to allow autonomous cars to drive on the open road. 

• Social acceptability. Social acceptance is not just an issue for those willing to buy an autonomous car, but also for others sharing the road with them. The public is an important stakeholder and should be involved in decisions about the introduction and adoption of autonomous vehicles.


What are the emerging regulations for autonomous vehicles?

In the U.S., federal car-safety regulation is based on the Federal Motor Vehicle Safety Standards (FMVSS) [12]. These regulations establish detailed performance requirements for every safety-related part of a car. Before a car can be introduced to the market, the manufacturer must certify that the vehicle is as safe as the cars already on the road. Federal regulations say little about how companies develop and test cars before bringing them to market. The federal government is providing nonbinding guidance, an appropriate approach in this environment of uncertainty.

State regulations concerning autonomous vehicle testing on public roads, and changes to them, vary widely. States such as California and New York are strict, requiring companies to apply for a license to test vehicles on the road. At the other extreme, no company needs a license to operate autonomous vehicles in Florida. As of 2021, 29 states plus D.C. have passed legislation, 10 states’ governors have issued executive orders, nine states have laws pending or that failed during the voting process, and the remaining states have taken no action at all [13].

The current regulatory landscape is reasonably friendly to the development of autonomous driving technology. States are experimenting with different levels of regulation, and competition among them will allow auto manufacturers to locate in the state that best suits their experimental program. This competition and experimentation across states should encourage the best approach to regulation to emerge over time.

Internationally, 60 countries reached a milestone in mobility with the adoption of a United Nations regulation that allows the safe introduction of automated vehicles in certain traffic environments [14]. The UN regulation, which addresses Level 3 automation, establishes strict requirements for Automated Lane Keeping Systems (ALKS). ALKS can be activated under certain conditions on roads where pedestrians and cyclists are prohibited and that are equipped with a physical separation dividing traffic moving in opposite directions. The speed limit for ALKS is 60 km/h. The regulation obliges car manufacturers to introduce Driver Availability Recognition Systems to detect and monitor the driver’s presence, and to equip vehicles with a Data Storage System for Automated Driving (DSSAD) to record when ALKS is activated.

How are automated and autonomous vehicles being accepted by the public? 

Fully autonomous vehicles have the potential to enhance safety by nearly eliminating the human-related factors that degrade driver performance, such as aging, disease, stress, fatigue, inexperience or drug abuse. However, there are several individual and social concerns around autonomous vehicle deployment: high maintenance costs, a possible rise in fuel consumption and carbon dioxide emissions from increased travel demand, legal and ethical issues relating to the protection of users and pedestrians, privacy concerns and the potential for hacking, and the loss of jobs for alternative transportation providers.

It is argued that the biggest barrier to widespread adoption of autonomous driving is psychological, not technical [11]. User acceptance is essential for autonomous driving to become a realistic part of future transportation. The definition of user acceptance is not standardized, as there are many different approaches to determining and modeling user willingness to accept autonomous vehicles.

Knowledge of public acceptance of autonomous driving is limited; more research is required to understand the psychological determinants of user acceptance. Influencing factors may include trust in the accuracy of autonomous technology, personal innovativeness, the degree of anxiety that might be caused by relinquishing control of driving, privacy concerns related to individual location data, and the high cost of precise sensor systems for wireless networks, navigation systems, automated controls and system integration [12, 15, 16].

References

(1) ABI Research. “ABI Research forecasts 8 million vehicles to ship with SAE Level 3, 4 & 5 autonomous technology in 2025.” www.abiresearch.com.

(2) U.S. Department of Transportation. “Federal automated vehicle policy—Accelerating the next revolution in roadway safety.” www.transportation.gov. 

(3) N. Joubert, T. Reid, and F. Noble, “A survey of developments in modern GNSS and its role in autonomous vehicles.” www.swiftnav.com. 2020. 

(4) P. Ross, “The Audi A8: the world’s first production car to achieve Level 3 autonomy.” https://spectrum.ieee.org/. Jul. 2017. 

(5) “Mercedes-Benz becomes world’s first to get Level 3 autonomous driving approval.” https://auto.hindustantimes.com/. Dec. 2021. 

(6) J. Gesley, “Germany: Road traffic act amendment allows driverless vehicles on public roads.” www.loc.gov. Jul. 2021.

(7) “Daimler truck to build scalable truck platform for autonomous driving.” www.fleetowner.com. Dec. 2021.

(8) Ashurst. “How Germany is leading the way in preparing for driverless cars.” Aug. 2021.

(9) J. Fayyad et al., “Deep learning sensor fusion for autonomous vehicle perception and localization: A review.” Sensors. Jul. 2020.

(10) C. Chen et al., “DeepDriving: Learning affordance for direct perception in autonomous driving.” Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile. 7–13 December 2015.

(11) L. Hsu, “What are the roles of artificial intelligence and machine learning in GNSS positioning.” Inside GNSS/GNSS Solutions. Nov./Dec. 2020. 

(12) National Highway Traffic Safety Administration. “Preliminary statement of policy concerning automated vehicles.” Feb. 2022. 

(13) National Conference of State Legislatures. “Autonomous vehicles | self-driving vehicles enacted legislation.” www.ncsl.org. Feb. 2020.

(14) The United Nations Economic Commission for Europe. “Uniform provisions concerning the approval of vehicles with regard to automated lane keeping systems.” https://unece.org/. Mar. 2020. 

(15) A. Shariff et al., “Psychological roadblocks to the adoption of self-driving vehicles.” Nature Human Behavior. 2017. 

(16) S. Nordhoff et al., “Acceptance of driverless vehicles: Results from a large cross-national questionnaire study.” Journal of Advanced Transportation. 2018.

Multiple Imaging Radars Integrate with INS/GNSS via AUTO Software: Reliable and Accurate Positioning for Autonomous Vehicles and Robots https://insidegnss.com/multiple-imaging-radars-integrate-with-ins-gnss-via-auto-software-reliable-and-accurate-positioning-for-autonomous-vehicles-and-robots/ Mon, 14 Mar 2022


Difficult GNSS environments and adverse weather conditions require a fusion of many sensors to maintain lane-level accuracy for autonomous platforms, without incurring high costs that would inhibit widespread adoption. Radars are an attractive option in a multi-sensor integration scheme because they are robust to adverse weather and insensitive to lighting variations.

The multi-radar integrated version of AUTO uses inertial navigation, real-time kinematic GNSS, an odometer, and multiple radar sensors with high-definition maps in a tight non-linear integration scheme. AUTO can reliably produce accurate and high-rate navigation outputs in real time and in all urban environments. Key performance indices computed during simulated GNSS outages quantify the accuracy of the solution over prolonged periods.

DYLAN KRUPITY, ABDELRAHMAN ALI, BILLY CHAN AND MEDHAT OMR, TRUSTED POSITIONING INC.

JONATHAN PREUSSNER AND ARUNESH ROY, UHNDER INC.

JACQUES GEORGY AND CHRISTOPHER GOODALL, TRUSTED POSITIONING

Solving the localization problem is a crucial step in enabling the development of autonomous platforms. To achieve high levels of autonomous driving, the positioning solution must be highly accurate, precise, reliable in all environments, and always available. This presents a challenge in areas where GNSS signals may be degraded or denied. Achieving autonomy will have a wide range of safety, social and economic benefits.

Sensor fusion, for example integrating GNSS with inertial navigation systems (INS) and odometry, can exploit the complementary strengths and weaknesses of different sensors, providing the best solution given all available measurements. Real-time kinematic (RTK) GNSS is a valuable and accurate source of information, providing up to centimeter-level absolute position updates in areas with reliable RTK coverage. However, GNSS still suffers from multipath and occlusions in deep urban canyons, underground parkades, and tunnels. INSs are always available as fully self-contained sensors and provide high-rate solutions, over 100 Hz, but they drift over time. This drift is especially significant for low-cost micro-electromechanical systems (MEMS) based sensors. GNSS updates can correct these errors, while the INS can bridge GNSS gaps. Unfortunately, in harsh urban environments, the extended periods of GNSS signal degradation or unavailability make it very challenging to maintain lane-level positioning. As a result, perception sensors are being explored as another independent source of information that can be integrated into the fusion stack. Common perception sensors include cameras and LiDAR. They can provide detailed information about a scene, which is very useful for object detection or map matching. However, cameras are affected by poor lighting from backlit objects and scenes, and adverse weather is a weakness of both sensors.

Radar does not suffer from these inherent weaknesses and can provide reliable measurements regardless of weather conditions. The drawback of radar, particularly from a map-matching perspective, is the sparseness of the data and a lower angular resolution than that of LiDAR. Recent advances in technology have mitigated some of these limitations, making radar a viable option for localization. State-of-the-art imaging radars can produce high-resolution information at long range on multiple dynamic targets with a high update rate, even in a cluttered scene. The measurements are in a 4D domain that includes range, Doppler, azimuth and elevation.

AUTO consists of the fusion of INS, GNSS, odometer, and radar with high-definition (HD) maps in a tightly integrated solution to provide lane-level accuracy. AUTO software flexibly supports multi-radar configurations to achieve up to 360-degree horizontal coverage of a scene to enhance the positioning accuracy. In addition to using the radar system for localization, radar mapping is also possible when using crowd-sourcing techniques.

AUTO is a real-time integrated navigation system that provides an accurate, reliable, high-rate, and continuous (always available) navigation solution for autonomous vehicles and robotic platforms. The software leverages several patents in its tight non-linear integration scheme to fuse information from multiple imaging radars with the INS/GNSS/odometer solution.

INS is always available and supports a high-rate output of 100 Hz. Furthermore, MEMS-based sensors can be obtained at affordable prices and in high volume. Based on these benefits, INS is used as the core of AUTO navigation. Figure 1 shows an overview of the AUTO system and its inputs. Accurate system-level time synchronization is performed to compensate for sensor latencies. Through the tight integration scheme, AUTO can output a full 3D navigation solution at 100 Hz, using all available measurements to compute the best possible solution even though some sensors have lower update rates.
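
The multi-rate pattern described here can be sketched as follows; the function and method names are assumptions for illustration, not the product's internal API:

import heapq

# Sketch: propagate the inertial solution at 100 Hz and apply lower-rate,
# time-stamped GNSS/odometer/radar updates whenever a measurement arrives.

def run_fusion(imu_stream, async_measurements, filter_):
    """imu_stream: iterable of (t, accel, gyro) at 100 Hz.
    async_measurements: list of (t, kind, value) from GNSS/odometer/radar.
    filter_: object exposing propagate() and update_<kind>() methods."""
    pending = list(async_measurements)
    heapq.heapify(pending)                       # ordered by timestamp
    for t, accel, gyro in imu_stream:
        filter_.propagate(t, accel, gyro)        # 100 Hz INS mechanization
        while pending and pending[0][0] <= t:    # latency-compensated updates
            _, kind, value = heapq.heappop(pending)
            getattr(filter_, "update_" + kind)(value)
        yield filter_.state()                    # full solution at 100 Hz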

AUTO provides high-accuracy, high-reliability positioning at scalable cost for autonomous land-based platforms. The integrated solution performs comparably to more costly high-end systems, achieving a high-rate solution with decimeter-level accuracy under all environments and conditions. With the array of sensors and redundant information available to the system, it can also be used for integrity monitoring applications. The self-contained nature of the INS always allows the system to fall back on INS positioning if other sensors fail. AUTO is optimized to use automotive-grade MEMS IMUs among its components:

• MEMS IMU

• High-accuracy GNSS (RTK/PPP)

• Imaging radar(s)

• Odometer/Vehicle Speed (DMI/CAN/OBD-II)

• Barometer

AUTO Sensor Fusion and Features

Figure 2 shows a different view of the main components of the AUTO system, including fused inputs such as IMU, GNSS, speed, HD maps, and radar detections (shown in yellow) in a scene.

Supported speed readings may come from the on-board diagnostics (OBD-II) port, a distance measuring instrument (DMI), a vehicle speed sensor (VSS), or the vehicle’s CAN bus. This means the vehicle speed input to AUTO can be conveniently and economically obtained from the vehicle’s existing odometry sensors. The measured speed can be affected by various factors, including tire pressure, temperature, and wear. This introduces an odometer scale factor, which AUTO continuously estimates in real time to correct any errors in the measured speed.
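
A minimal sketch of such a scale-factor estimate, comparing odometer speed against GNSS-derived speed with a scalar Kalman update, is shown below; thresholds and noise values are illustrative, and a production system would carry this state inside the navigation filter rather than in a standalone estimator:

# Illustrative recursive estimate of the odometer scale factor s in the
# model gnss_speed = s * odo_speed + noise (scalar Kalman update).

class ScaleFactorEstimator:
    def __init__(self, s0=1.0, p0=0.1, meas_var=0.04):
        self.s, self.p, self.r = s0, p0, meas_var  # estimate, variance, noise

    def update(self, odo_speed, gnss_speed):
        if odo_speed < 0.5:                 # skip near-standstill samples
            return self.s
        h = odo_speed
        k = self.p * h / (h * self.p * h + self.r)
        self.s += k * (gnss_speed - self.s * h)
        self.p *= (1.0 - k * h)
        return self.s

est = ScaleFactorEstimator()
for odo, gnss in [(10.0, 9.7), (12.0, 11.6), (15.0, 14.5)]:
    print(est.update(odo, gnss))            # converges toward ~0.97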

AUTO can provide ego velocity updates to each of the radars in real time, regardless of their orientation relative to the vehicle body frame or the IMU frame. The ego velocity information enables the system to obtain reliable radar measurements of static objects in the background. High-accuracy RTK GNSS, and radars with HD maps, provide absolute positioning inputs that AUTO can use to self-calibrate. The multi-radar support enables a more feature-rich scan of the scene, improving the map-matching and localization process. In addition, AUTO uses techniques for very accurate time synchronization across all sub-systems and sensors to achieve the best possible positioning accuracy. This tightly integrated navigation solution can continuously estimate, calibrate, and correct for mounting misalignments of the inertial sensors in real time, including roll, pitch, and heading misalignments.
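
The ego-velocity aid can be illustrated with a short sketch: for a static target, the expected Doppler (range rate) is the body-frame velocity rotated into the radar frame and projected, with sign flipped, onto the line of sight. The extrinsics and numbers below are illustrative:

import numpy as np

# Expected Doppler of a static target for a radar with known extrinsics.

def expected_static_doppler(v_body, R_radar_from_body, bearing_unit_radar):
    """v_body: (3,) vehicle velocity in the body frame [m/s].
    R_radar_from_body: (3,3) rotation from body frame to radar frame.
    bearing_unit_radar: (3,) unit line-of-sight to the detection, radar frame.
    Returns the range rate a static object should exhibit [m/s]."""
    v_radar = R_radar_from_body @ v_body
    return -float(bearing_unit_radar @ v_radar)

# Forward-facing radar, vehicle moving straight ahead at 15 m/s:
R = np.eye(3)
los = np.array([1.0, 0.0, 0.0])
print(expected_static_doppler(np.array([15.0, 0.0, 0.0]), R, los))  # -15.0
# Detections whose measured Doppler deviates from this prediction are moving
# objects and can be excluded from map matching.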


Real-Time Software and Reference Design

The AUTO software can process all the sensor inputs and compute the integrated solution at 100 Hz in real time. AUTO is highly optimized to run efficiently on application processors with different operating systems. Real-time testing of the software was based on an NVIDIA Jetson AGX Xavier system, which is used in the evaluation kit shown in Figure 3. It consists of an ARM Cortex-A57 processor with 8 GB of RAM.


Imaging Radar

To obtain the maximum benefit from radar-based localization, high-resolution digital imaging radars from Uhnder are used to produce detailed scans of the environment. Uhnder offers a fully software-defined digital imaging radar on a single chip (RoC). Unlike traditional automotive radar, which uses an analog frequency modulated continuous wave (FMCW) architecture, Uhnder uses a digital code modulation (DCM) architecture with 192 virtual receive channels. This digital architecture yields 16 times better angular resolution, 24 times more power on target, and 30 times better contrast than analog radar, allowing it to detect objects out to 300 meters. Precise time synchronization, fine angular resolution and high-contrast resolution (HCR) are critical for localization. The digital radar detects and resolves both static and dynamic objects, small or large, even when they are located in close proximity to each other, greatly improving positioning accuracy, reliability, and integrity across all weather conditions and environments.

Interference robustness and mitigation are important considerations for perception sensors used for localization, especially in heavy traffic and urban environments. The number of radars is growing exponentially as new vehicles are equipped with more advanced driver assistance system (ADAS) and automated driving system (ADS) functions. In the DCM architecture, every transmitter is identified by one of a quintillion (10^18) unique codes. This minimizes mutual interference between digital radars. Uhnder also incorporates additional interference mitigation technology into its chips to suppress impacts from traditional FMCW analog radar interference.
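
A toy illustration of why per-transmitter coding limits mutual interference: two long pseudo-random codes are nearly orthogonal, so a correlator matched to one code strongly attenuates the other. The code length and seeds below are arbitrary, and real radar waveform design is far more involved:

import numpy as np

# Two independent pseudo-random binary codes are nearly orthogonal.

rng = np.random.default_rng(0)
own_code = rng.choice([-1.0, 1.0], size=4096)
other_code = rng.choice([-1.0, 1.0], size=4096)

received = 1.0 * own_code + 1.0 * other_code     # equal-power overlap
matched = received @ own_code / len(own_code)    # correlate with own code
print(matched)                                   # ~1.0: own signal preserved
print(other_code @ own_code / len(own_code))     # ~0.0: interferer suppressed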

Table 1 provides a summary of the configuration and specifications for the Uhnder RoC sensor that was used for AUTO testing. 


INS/GNSS + Radar Vehicle Test Setup

The AUTO system vehicle test setup is shown in Figure 4. Note the five imaging radars mounted to the roof of the vehicle. This configuration consists of four corner-mounted radars, plus one forward facing radar at the front. This configuration gives a combined horizontal coverage of 360 degrees, with significant overlapping horizontal field of view between the radars. The evaluation kit that contains the rest of the system hardware and components is set in the trunk of the vehicle.

Table 1 lists the radar configurations and specifications used in testing. Note that our testing used a range resolution of 30 centimeters with a range of 160 meters, although higher resolutions are available in different software-programmable modes. The AUTO system was rigorously tested using this setup in multiple locations and seasons, covering a variety of weather conditions, temperatures and times of day to ensure reliable and consistent performance.

The system was thoroughly tested and characterized using many different sensors to ensure reliable performance. Challenging GNSS environments considered for testing were downtown areas and urban canyons where natural outages can occur due to buildings, bridges, tunnels, and multipath. To provide statistically significant key performance indices (KPIs), simulated GNSS outages were introduced with various durations lasting up to 8 minutes.


Vehicle Positioning Results

These results show the positioning accuracy of the AUTO system by displaying the navigation solutions of one or more test trajectories in Google Earth. The data in Figure 5 shows natural GNSS multipath and outages, indicated by the green arrows, due to blockages from high buildings in Downtown Calgary and due to passing under bridges.

In Figure 6, a simulated GNSS outage is introduced after the first 200 seconds. Notice how the vehicle deviated by more than 36 meters by the end of the trajectory in the INS-only solution (in blue), while the tightly integrated INS/GNSS with radar solution (in red) maintained lane-level accuracy.


Vehicle Performance Consistency Results

To test the system’s performance consistency, multiple groups of data were collected. Figure 7 shows the tightly integrated INS/GNSS with radar results for 15 test trajectories that were collected at different times of day and night. A simulated GNSS outage is introduced after the first 90 seconds. In this test the driver used two different lanes of the street when turning left and heading south in section “a.” When moving north in section “b” the driver takes multiple different lanes of the road.

Figure 8 shows a similar test using 15 trajectories that were collected using a different route. A simulated GNSS outage is introduced after the first 300 seconds. In this test the driver used two different lanes of the street while heading south in section “a” of Figure 8. When moving north in section “b” the driver takes any one of the available north bound lanes. Note that the gap between the trajectories in section “b” occurs where the road is divided in a tunnel when passing under a bridge.


Key Performance Indices 

To statistically assess the performance of the navigation solution, KPIs were computed using multiple trajectories. Simulated GNSS outage intervals are introduced with 1, 2, 4 and 8-minute durations. During the simulated outages, the GNSS solution is not passed to the AUTO system. However, the reference solution uses all the GNSS readings without any simulated outages from another instance of an AUTO processing run. This means each trajectory is run five times: using simulated outages of 1, 2, 4 and 8 minutes, plus a reference run without outages. The error is calculated between the configuration of AUTO under assessment (with outages) and the reference (without outages).
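
The error computation can be sketched as follows; the array names are illustrative, and the solution and reference runs are assumed to be time-aligned at the output rate:

import numpy as np

# Per-epoch horizontal error between the outage-configured run and the
# no-outage reference run, summarized as RMS and percentile statistics.

def horizontal_kpis(solution_en, reference_en):
    """solution_en, reference_en: (N, 2) east/north positions [m],
    time-aligned at the solution output rate."""
    err = np.linalg.norm(solution_en - reference_en, axis=1)
    return {
        "rms_m": float(np.sqrt(np.mean(err**2))),
        "p50_m": float(np.percentile(err, 50)),
        "p95_m": float(np.percentile(err, 95)),
        "max_m": float(err.max()),
    }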

A set of three tables are shown, which present the results using a configuration of one, three and four radars. All other parameters, configurations and components are kept the same. Table 2 shows the KPI results for the AUTO integrated radar and INS/GNSS solution of a group of data collected in Downtown Calgary. In this case, the datasets use one forward facing second-generation Uhnder radar (UR-HM1140-X).

Table 3 shows the KPI results for the integrated radar and INS/GNSS solution for the same test setup, except the datasets now use a three-radar configuration, with one forward facing radar and two backward facing radars mounted on the rear corners of the vehicle. The increased combined field of view of the additional radars from this configuration yields a noticeable improvement in the positioning accuracy.

Table 4 shows the KPI results for the integrated radar and INS/GNSS solution again; this time, the datasets use a four-radar configuration consisting of four radars mounted on the corners of the vehicle. Again, an improvement in the accuracy of the solution can be observed with the additional radar.


Results for Robot

The AUTO system has also been tested and verified to work on land-based robotic platforms. The system shown uses motor encoders to obtain a speed input for AUTO. An example of the TDK robot is shown in Figure 9. GNSS antennas can be seen on the top cover of the robot. Four radars are mounted in the corners of the robot; window openings above the radars allow for video camera recording of the environment. All results for the robot presented in this article use the UR-HM1010 radar sensors from Uhnder.

Test trajectories collected in Downtown Calgary demonstrate the lane-level positioning accuracy of the AUTO system for the robot. In Figure 10, a simulated GNSS outage is introduced after the first 200 seconds. Notice how the robot deviated by more than 33 meters by the end of the trajectory in the INS-only solution (in blue), while the tightly integrated INS/GNSS with radar solution (in red) maintained sidewalk-level accuracy and successfully closed the loop.


Robot Performance Consistency 

Fifteen test trajectories were collected at different times of day and night in downtown Calgary to test the consistency of the AUTO solution on the robot. Figure 11 shows the tightly integrated INS/GNSS with radar results. A simulated GNSS outage is introduced after the first 200 seconds. The operator follows the same route on the sidewalk around the block in each trajectory. As can be observed, all the solutions follow the sidewalk and close the loop with the start point, despite having no GNSS updates for most of the run.


Robot KPI

The robot results are generated using the same methodology as for the vehicle: simulated GNSS outages are introduced with 1-, 2-, 4- and 8-minute durations and compared against a reference solution generated without any simulated outages. Table 5 shows the KPI results for a group of data collected in downtown Calgary. In this case, the datasets used four corner-mounted UR-HM1010 Uhnder radars. Even though these results are with the first-generation radar, the errors are smaller than for the vehicle because of the robot’s much lower average speed and distance travelled.


HD Map Support

AUTO uses radar for localization by map-matching the radar scans to a globally referenced HD map. All the results shown previously in this article used a LiDAR map based on a pre-survey of the test area in downtown Calgary. AUTO supports a variety of input map types for localization, including both 2D and 3D maps, as well as different formats such as point cloud and occupancy grid maps. However, maps do not have to be derived from a LiDAR pre-survey: with the high-resolution 4D radar imagery, AUTO can generate radar-based maps through crowdsourcing techniques using its existing sensor suite, without needing LiDAR.

In this case, the INS/GNSS system is used for localization, while the radars are used for map building. Crowdsourcing makes use of multiple passes through the same survey area, preferably using different routes and different times of day, to avoid occlusions and remove non-permanent features like parked cars. The results are then aggregated to form a crowdsourced radar map. A multi-radar configuration consisting of at least three radars is especially useful for the crowdsourcing technique: assuming a proper geometrical arrangement, a complete 360-degree horizontal field of view can be achieved. The crowdsourced maps can then be used in subsequent runs as a global reference map for localization purposes.
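
A minimal sketch of the aggregation step, assuming each pass has already been registered to a common global occupancy grid, is given below; the grid representation and the voting threshold are illustrative assumptions:

import numpy as np

# Count how many independent passes observed each occupancy-grid cell and
# keep only cells seen in a minimum fraction of passes, which suppresses
# non-permanent features such as parked cars.

def aggregate_passes(pass_grids, min_fraction=0.6):
    """pass_grids: list of boolean (H, W) occupancy grids, one per pass,
    already registered to a common global frame."""
    counts = np.sum(np.stack(pass_grids).astype(np.int32), axis=0)
    return counts >= int(np.ceil(min_fraction * len(pass_grids)))

# Example: a car parked during 2 of 5 passes is dropped; a building wall
# observed in all 5 passes is kept in the final map.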

An example of a crowdsourced map of downtown Detroit is presented in Figure 12. This map was generated using a total of 94 trajectories collected over several days using a three-radar configuration (one forward-facing radar and two rear-corner radars). The streets, building walls, tunnels, and other features are clearly mapped. Note that the sparse detections for the road near the bottom left portion of the image are due to a high overpass that is not in the radar field of view in the elevation direction.

Once HD maps are crowdsourced and formed using radars, they can then be used as the HD map source for localization. This seamless process allows new areas to be mapped using the existing vehicle platforms that also perform the localization. This is especially useful for applications with new or changing environments, such as during road construction, sidewalk diversions, or for positioning vehicles within an active construction site.

Conclusion

AUTO can achieve accurate lane-level positioning in different environments, including harsh downtown areas and urban canyons, regardless of adverse weather conditions. The tight integration of radar and HD maps with INS/GNSS can provide a reliable, accurate, and continuous positioning navigation system capable of operating at 100 Hz, which is required for autonomous machine control. By leveraging multi-radar configurations, the system can make use of the expanded total horizontal field of view, thereby detecting more features and improving the positioning accuracy as compared to a single radar setup. In addition, a multi-radar configuration can provide detailed scans of the environment, enabling crowdsourcing techniques to be used to create radar-based maps for navigation. The AUTO system was shown to maintain lane-level and sidewalk-level positioning even during extended GNSS outages, demonstrating the potential for such a tight integration system for both vehicle and robotic platforms for a variety of applications. 

Manufacturers

The following are suggested automotive-grade IMUs for AUTO (accelerometer, gyroscope): InvenSense IAM-20680—AEC-Q100; IAM-20680-HT—AEC-Q100; IAM-20685—ASIL B. Some suggestions for industrial: InvenSense IIM-46230; IIM-46234.

The AUTO evaluation kit includes: TDK InvenSense IAM-20680-HT MEMS IMU, TDK InvenSense ICP-10101 MEMS barometer, u-blox ZED-F9P GNSS receiver, Atmel ARM Cortex M4 processor.

AUTO has been intensively tested using these devices: UR-HM1010 first-generation RoC sensor, UR-HM1140-X second-generation RoC sensor; IAM-20680, IAM-20680-HT and IIM-46234 MEMS IMUs mounted in the trunk; multiple GNSS receivers: u-blox ZED-F9P, u-blox NEO-M8P, and NovAtel Flexpak6 OEM628; different speed sensors (OBDII, DMI); ICP-10101 MEMS barometer.

In Figure 5, data was collected with a NovAtel Flexpak6 OEM628 GNSS receiver, an IAM-20680 IMU, a DMI as the speed source, and one forward-facing first-generation Uhnder radar (UR-HM1010).

Figures 6 and 7 show data from a u-blox ZED-F9P GNSS receiver, an IIM-46234 industrial-grade IMU, a DMI as the speed source, and one forward-facing first-generation Uhnder radar (UR-HM1010).

Figure 8 used the above suite and five UR-HM1140-X Uhnder radars.

Figures 10 and 11 used the u-blox NEO-M8P GNSS receiver, IAM-20680 IMU, motor encoder for speed source, and four UR-HM1010 Uhnder radars.

Authors

Dylan Krupity is a software designer at Trusted Positioning Inc., a TDK Group Company. He received his B.Sc. in geomatics engineering from the University of Calgary, Canada.

Abdelrahman Ali is a software algorithms manager at Trusted Positioning. He received his interdisciplinary Ph.D. in geomatics engineering and electrical and computer engineering from the University of Calgary.

Billy Chan is a software designer at Trusted Positioning. He received an M.Sc. in geomatics engineering from the University of Calgary.

Medhat Omr is a software algorithms manager at Trusted Positioning. He received a Ph.D. in electrical and computer engineering from Queen’s University, Canada.

Arunesh Roy serves as Uhnder’s senior director for advanced applications and perception. He earned his doctorate in electrical engineering from Wright State University.

Jonathan Preussner is a radar applications engineering manager at Uhnder. He received his M.S. degree in electrical engineering from the University of Florida.

Jacques Georgy is the senior director of navigation R&D at Trusted Positioning. He received his Ph.D. degree in Electrical and Computer Engineering from Queen’s University.

Christopher Goodall is the co-founder, managing director and president of Trusted Positioning. He has a Ph.D. in geomatics engineering from the University of Calgary.

EMCORE to Acquire L3Harris Space and Navigation Business https://insidegnss.com/emcore-to-acquire-l3harris-space-and-navigation-business/ Wed, 16 Feb 2022


EMCORE Corporation, a provider of advanced mixed-signal products that serve the aerospace & defense, communications, and sensing markets, announced that it has entered into a definitive agreement to acquire the assets and liabilities of the L3Harris Space and Navigation Business for approximately $5 million in an all-cash transaction.

“L3Harris Space and Navigation designs and builds some of the most accurate Navigation products in the world. This acquisition expands our Fiber Optic Gyroscope (FOG) product portfolio into the Strategic Grade and Space-Qualified markets. We will also gain a technical team with a sterling track record of development and production of high-performance FOGs, Ring Laser Gyros (RLGs), and reaction wheels,” said Jeff Rittichier, President and CEO of EMCORE. “This acquisition further solidifies EMCORE’s position as one of the largest independent inertial navigation providers in the industry. This is an excellent fit strategically for EMCORE, bringing Space and Navigation’s strong brand, inertial technology, and important program wins. It also expands EMCORE’s market reach into launch vehicle and space satellite markets, both of which are seeing significant growth,” he added.

“The L3Harris Space and Navigation team will provide EMCORE with the capability to accelerate expansion into a true navigation-grade FOG business with superior performance and accuracy compared to competitors,” commented Albert Lu, Senior Vice President and General Manager, Aerospace and Defense for EMCORE. “Combining this business into EMCORE will allow us to provide customers with an expanded product suite that serves a broader range of requirements across both the tactical and navigation grade segments of the market.”

Highlights of the transaction are as follows:

• Expands EMCORE’s inertial navigation product portfolio and addressable market, accelerating growth and contributing additional revenue
• Includes Master Supply Agreements (MSAs) for the BoRG (Booster Rate Gyro) and TAIMU (Tri-Axial Inertial Measurement Unit) launch vehicle programs and creates partnership opportunities with L3Harris to expand our mutual business together
• EMCORE to be added as a preferred supplier to L3Harris divisions for future business opportunities
• Adds a complete set of capabilities to design and test for space applications:
  • Shock, vibration, and thermal shock measurement equipment
  • X-ray capability and vacuum chambers
  • A large number of rate tables that can serve multiple product applications
• Expected to create material operating synergies in engineering, manufacturing, and sales
• Expected to be non-GAAP EPS accretive

Through the transaction, EMCORE will acquire all the intellectual property, and outstanding assets and liabilities, of the L3Harris Space and Navigation business, including the 110,000 square foot leased production facility in Budd Lake, NJ. The consummation of the transaction is subject to customary closing conditions and is currently expected to close in the quarter ending June 30, 2022.
