State of the Self-Driving Car: Where Are They Taking Us?
July 2025
Published on Impatient Optimists, a Spotify and YouTube Show
In 2013 I was a freshman in high school. I remember being wowed back then by the prospect of what Tesla and its CEO were promising. Self-driving, we were assured in 2015, was just a few years away. At the time, it was an amazing idea. The coolest thing the fanciest cars could do then was start themselves and park themselves.
It’s been a decade since then. While the technology has advanced, full self-driving remains out of reach for the vast majority of Americans.
Before getting too deep into the topic, it’s important to clarify what “self-driving” actually means. The term gets used loosely, but in reality, there’s a standardized framework developed by the Society of Automotive Engineers (SAE) that defines six levels of driving automation, from 0 to 5. These levels describe how much responsibility the vehicle handles versus the human driver.
- Level 0 – No Automation: The driver controls everything. There may be basic alerts or emergency features like lane departure warnings or automatic emergency braking, but these don’t count as automation.
- Level 1 – Driver Assistance: The vehicle can assist with either steering or acceleration/braking, but not both at the same time. An example would be adaptive cruise control. The driver is still fully responsible and must stay engaged.
- Level 2 – Partial Automation: The car can control both steering and speed under certain conditions, like highway driving. The driver must monitor the system at all times and be ready to take over. Tesla’s Autopilot and Full Self-Driving beta, along with GM’s Super Cruise, fall into this category.
- Level 3 – Conditional Automation: The vehicle can drive itself in specific environments, like on a limited-access highway, and the driver does not have to monitor the system constantly. However, the driver must be available to take over when the system requests. Mercedes currently has the only certified Level 3 system on the road, and even that is limited to specific regions and traffic conditions.
- Level 4 – High Automation: The car can handle all driving tasks within a defined geofenced area without any driver involvement. If conditions fall outside of its operational domain (for example, bad weather or unmapped roads), the car will safely stop or request remote assistance. Waymo and Cruise currently operate at this level with their robotaxi services in limited cities.
- Level 5 – Full Automation: No human input is needed, ever. The car can drive anywhere a person could, under any conditions. There are no pedals or steering wheels. This is the theoretical end goal, but no company is close to this in real-world deployment.
Despite the branding and the C-suite magic, no consumer vehicle on the road today operates beyond Level 2. Tesla’s “Full Self-Driving” system, while marketed as advanced, still requires constant supervision and isn’t considered autonomous under the SAE framework.
That said, self-driving cars are no longer a speculative idea. They’re deployed in pilot programs, operating in public traffic, and actively shaping urban transport planning. The concept of autonomy is firmly in the public and regulatory conversation – it’s just not evenly distributed yet, and full adoption is still a technical and policy challenge.
As it turns out, there’s a lot more skepticism about this technology than you might believe. According to a 2023 AAA study, 60% of Americans said they’d be afraid to ride in a fully self-driving vehicle. Only 13% said they would trust it – though that’s up from 9% the year before. These numbers are ticking upward year after year.
Much of this hesitation likely stems from limited real-world exposure to the technology. Once it becomes sufficiently cheap and useful, self-driving will be another mass-market expectation, just as most people now expect their cars to connect to their phones.
The hesitation is understandable. The self-driving car isn’t like replacing wired headphones with Sony Bluetooth earbuds or AirPods, or like replacing a Motorola Razr with an iPhone in 2010. Driving is personal – and unlike a phone upgrade, adopting this technology means placing your family’s safety in its hands.
Tesla tends to dominate the conversation, and for good reason. It has done the best job of capturing public and investor interest in self-driving cars. But for many years, companies like Waymo and Cruise (a division of General Motors) have been operating robotaxi services in cities like Phoenix, San Francisco, and Los Angeles. And it’s not just consumer passenger cars; there’s a B2B side too. Aurora is a startup already overseeing self-driving semi-trucks on highways, and it plans to expand its driverless service to El Paso and Phoenix by the end of this year.
That brings us to a key point: these companies aren’t all solving the same problem the same way. They’re using different sensors, architectures, and philosophies.
I suggest we take a step back and look at what it takes to build a car that can see and understand the world well enough to drive itself.
If you’re trying to teach a car to see and understand the world well enough to drive itself safely, you need to give it sensors – its eyes and ears – but no single type of sensor gives you everything you need. So you start by thinking about the kinds of questions the car has to answer: What’s around me? How far away is that object on the road? Is it moving? What’s it doing?
The core design question becomes: what kind of physical signals can we reliably detect from the environment, and what information do those signals actually carry?
Approach 1 is to use cameras. Cameras operate in the visible light spectrum, roughly 400 to 700 nanometers in wavelength. They’re passive sensors – they don’t emit anything, they just receive reflected sunlight or headlights. They capture rich 2D color images that are excellent for semantic understanding. That means recognizing stop signs, reading traffic lights, distinguishing a pedestrian from a trash can. But from a physics standpoint, they lack any direct range data. Depth must be inferred through perspective, parallax (if using two cameras), or motion over time. These inferences are computationally expensive and can be unreliable, especially in poor lighting, high glare, fog, or visually ambiguous scenes.
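To make the parallax idea concrete, here is a minimal sketch in Python of how a stereo pair turns pixel disparity into depth. The focal length, baseline, and disparity values are hypothetical, chosen only to show the geometry; a real pipeline would also calibrate the cameras and match pixels between the two images first.

```python
# Toy stereo-depth estimate: depth = focal_length * baseline / disparity.
# All numbers below are hypothetical; they are not from any production system.

def stereo_depth_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a point seen by two cameras mounted a fixed baseline apart."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero would mean infinite distance)")
    return focal_length_px * baseline_m / disparity_px

# A hypothetical rig: 1000-pixel focal length, cameras 30 cm apart.
print(stereo_depth_m(1000.0, 0.30, 25.0))  # 12.0 m away
# At long range, a one-pixel matching error swings the estimate wildly:
print(stereo_depth_m(1000.0, 0.30, 2.0))   # 150.0 m
print(stereo_depth_m(1000.0, 0.30, 1.0))   # 300.0 m
```

The last two lines hint at why camera-only depth gets noisy with distance: far away, the disparity shrinks toward a pixel or two, and tiny matching errors translate into enormous range errors.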
Approach 2 is LiDAR, which, by contrast, uses near-infrared laser light, typically around 905 or 1550 nanometers. It’s an active system: it emits pulses of laser light and measures how long it takes for them to reflect back from surfaces. Because the speed of light is known (about 3×10⁸ meters per second), you can calculate the distance with high precision. LiDAR builds a direct, real-time 3D point cloud of the world, each point representing a known distance and angle. This makes LiDAR ideal for mapping geometry: curbs, poles, other vehicles, pedestrians, even subtle road features. However, laser light interacts with the environment in predictable but sometimes problematic ways. Water droplets scatter light through Mie scattering, reducing range and adding noise. Dust and snow do something similar. And since LiDAR has a narrow field of view per beam, it must mechanically or optically sweep to build a full scene, introducing mechanical complexity and latency.
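The ranging math itself is refreshingly simple. Here is a small sketch of the time-of-flight calculation described above; the 200-nanosecond return time is just an illustrative value.

```python
# Time-of-flight ranging: a laser pulse travels out and back, so the
# one-way distance is (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to the surface that reflected the pulse."""
    return C * round_trip_s / 2.0

# A return arriving 200 nanoseconds after the pulse left the emitter
# corresponds to a surface roughly 30 meters away.
print(lidar_range_m(200e-9))  # ≈ 29.98 m
```

The hard part in practice isn’t this equation; it’s firing and timing enormous numbers of these pulses across a sweeping field of view while rejecting returns scattered by rain, dust, and snow.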
Finally, the third approach is radar, which works with much longer wavelengths – millimeter waves, roughly 1 to 10 millimeters. This gives radar the ability to penetrate fog, rain, and even thin materials like plastic or clothing. It’s also Doppler-capable, meaning it can detect motion by observing the frequency shift of the return signal, giving you not just position, but relative velocity. From a physics standpoint, the spatial resolution of radar is fundamentally limited by its wavelength and antenna size. You can’t get fine detail or object shape from radar, but it’s unmatched in robustness and motion detection. For example, it can spot a car coming at you in a dust storm when other sensors fail.
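The Doppler relationship is just as compact. Here is a sketch assuming a 77 GHz automotive radar, a common band; the specific frequency shift below is made up for illustration.

```python
# Two-way Doppler: a reflector moving toward the radar compresses the return
# signal, and the radial (line-of-sight) speed works out to
#   v = c * frequency_shift / (2 * carrier_frequency)
C = 299_792_458.0  # speed of light in m/s

def radar_radial_speed_mps(freq_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed along the line of sight; positive means the target is closing."""
    return C * freq_shift_hz / (2.0 * carrier_hz)

# A +10.3 kHz shift on a 77 GHz carrier implies a target closing at
# roughly 20 m/s (about 45 mph), even if the scene is a wall of dust.
print(radar_radial_speed_mps(10_300.0))  # ≈ 20.05
```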
Designing a self-driving perception system means thinking in terms of signal physics: how light or radio waves propagate, reflect, scatter, and absorb in different conditions. Cameras give you high spatial resolution and semantic richness but are easily blinded. LiDAR gives you structured depth, but only under good atmospheric conditions. Radar is resilient and velocity-aware but imprecise in object shape. By combining these, and aligning their outputs in both space and time, you get a richer, more robust model of the world.
The real challenge is in sensor fusion: merging fundamentally different signal types into a coherent model. It’s not just a data problem; it’s a physics problem. You’re reconciling line-of-sight optical reflection, time-of-flight measurements, and Doppler shifts, all happening across different wavelengths and noise environments. The better your understanding of the physics behind each modality, the better you can design algorithms that trust the right sensor at the right time and ultimately build a system that can see AND understand.
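As a toy illustration of what “trusting the right sensor at the right time” can look like in code, here is a one-dimensional sketch that fuses independent range estimates by weighting each inversely to its noise – the core idea behind a simple Kalman-style update. The noise figures are invented for illustration, not measured from any real sensor.

```python
# Minimal 1-D sensor fusion: combine range estimates from camera, LiDAR,
# and radar by inverse-variance weighting (the noisier a sensor, the less
# it counts). Noise figures below are invented for illustration.

def fuse_ranges(estimates: list[tuple[float, float]]) -> float:
    """estimates: list of (range_m, std_dev_m); returns the fused range in meters."""
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    return sum(w * r for w, (r, _) in zip(weights, estimates)) / sum(weights)

# Hypothetical readings for the same pedestrian:
#   camera-inferred depth: 31.0 m, noisy   (±3.0 m)
#   LiDAR:                 29.5 m, precise (±0.1 m)
#   radar:                 29.0 m, decent  (±0.5 m)
print(fuse_ranges([(31.0, 3.0), (29.5, 0.1), (29.0, 0.5)]))  # ≈ 29.48, dominated by LiDAR
```

In fog, you would inflate the camera and LiDAR variances and the same formula would quietly lean on the radar instead; real systems do this continuously, in three dimensions, with full covariance matrices.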
It’s important to carefully understand these limitations, because there’s a human cost associated with failures of these systems. Back in 2018, a self-driving Uber hit and killed a woman who was crossing the street in Arizona.
Uber’s car detected Elaine Herzberg crossing the street with her bike on radar and LiDAR about six seconds before the crash, but the software didn’t recognize the shape as a pedestrian because she wasn’t in a crosswalk. It kept changing its classification between “car,” “bike,” and “unknown object” – and every time the label changed, it discarded the object’s tracking history, so it couldn’t predict that she was about to be hit.
Because of that, the car never slowed down.
Uber’s self-driving software failed to understand what it saw, didn’t act, and disabled systems that could have prevented the crash. The backup driver wasn’t paying attention.
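To make the tracking failure concrete, here is a toy sketch of my own – not Uber’s actual software – showing why a tracker that starts a fresh track every time the label changes can never estimate an object’s motion, and therefore can never predict a collision.

```python
# Toy illustration: velocity can only be estimated from at least two
# observations of the *same* tracked object. This is my own sketch of the
# failure mode, not Uber's actual code.

class Track:
    def __init__(self, label: str):
        self.label = label
        self.history = []  # list of (time_s, lateral_position_m) observations

    def observe(self, t: float, x: float) -> None:
        self.history.append((t, x))

    def lateral_speed(self):
        """Crude speed estimate from the first and last observation, or None."""
        if len(self.history) < 2:
            return None  # no history -> no motion estimate -> no prediction
        (t0, x0), (t1, x1) = self.history[0], self.history[-1]
        return (x1 - x0) / (t1 - t0)

# If every relabel ("unknown" -> "bike" -> "unknown") spawns a brand-new Track,
# each track only ever holds a single observation, lateral_speed() is always
# None, and the planner never sees an object drifting into the vehicle's path.
```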
More recently, in late 2023, General Motors’ Cruise robotaxi service hit a setback. One of its driverless cars in San Francisco was involved in a collision that ended up dragging a pedestrian, and there were reports of Cruise cars blocking emergency vehicles. Regulators responded by revoking Cruise’s license to operate driverless taxis in the city.
As you can imagine, then, one other thing self-driving companies are cognizant of is the regulatory aspect. There’s no federal law that fully covers autonomous vehicles, so it’s a patchwork of state-by-state rules. Texas, for example, has removed virtually all guardrails. Starting in 2017, the state passed laws that allow driverless cars on public roads with almost no special permission needed. As long as the vehicle is registered, insured, and equipped to record data, it’s treated much like any other car. In fact, Texas forbids individual cities from making their own stricter rules.
California, by contrast, requires companies to obtain permits and report detailed data on disengagements – the moments when a human safety driver has to take over.
Outside of the US, Beijing is allowing fully driverless taxis and buses on city roads this year. Companies like Baidu (with its Apollo Go service) and Pony.ai are rapidly expanding public robotaxi trials; Baidu, for example, planned to deploy 1,000 robotaxis in Wuhan by the end of 2024.
All of this highlights a deeper truth: self-driving companies aren’t just battling engineering challenges – they’re navigating legal uncertainty, public trust, and the slow erosion of traditional norms around driving. One of the most under-appreciated shifts is what autonomy means for ownership. After all, part of the reason regulators are uneasy is because of accountability: who’s responsible when a driverless car hits someone? Is it the company, the engineer who wrote the code, or the city that approved it? When there’s no human behind the wheel, control – and therefore liability – shifts upward to the companies deploying these fleets.
This model, where a small number of powerful firms manage fleets of vehicles and the software that drives them, reinforces another broader trend. Many people today express a quiet discomfort with how ownership is slipping away. Millennials and Gen Z, in particular, are renting homes longer, subscribing to Spotify or Apple Music instead of buying albums, and accessing games through services like Xbox Game Pass rather than owning individual titles. Now, the same dynamic is taking hold in transportation. Companies like Uber and Lyft have already nudged us away from personal car ownership, and autonomous vehicle companies like Waymo and Cruise are doubling down on that shift.
Waymo and Cruise charge by the ride, just like a taxi or rideshare service. It’s a model that fits today’s AV landscape. Self-driving technology is expensive, and deploying it in fleet-based robotaxis allows those costs to be amortized over thousands of trips. These vehicles can operate nearly around the clock, maximizing their value and productivity, unlike personal cars, which sit idle over 90 percent of the time. Fleets also give companies tighter control over vehicle performance, safety updates, and the treasure trove of user data that flows in from every route taken. That data, they argue, helps improve efficiency and tailor the experience, though the value captured mostly benefits the company itself.
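A rough back-of-the-envelope calculation, using entirely hypothetical numbers, shows why the amortization argument favors fleets:

```python
# Hypothetical amortization of an autonomy hardware stack over miles driven.
# Every number here is an assumption chosen purely for illustration.

def hardware_cost_per_mile(stack_cost_usd: float, miles_per_day: float,
                           service_years: float = 5.0) -> float:
    """Hardware cost spread over all miles driven during the service life."""
    return stack_cost_usd / (miles_per_day * 365 * service_years)

# A robotaxi running ~300 miles a day vs. a personal car averaging ~40,
# both carrying a hypothetical $100,000 sensor-and-compute stack:
print(round(hardware_cost_per_mile(100_000, 300), 2))  # ≈ $0.18 per mile
print(round(hardware_cost_per_mile(100_000, 40), 2))   # ≈ $1.37 per mile
```

Under those made-up assumptions, the same hardware costs the fleet a fraction per mile of what it would cost an individual owner, which is part of why the robotaxi model came first.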
Tesla, by contrast, built its brand on the idea of personal ownership and autonomy. The dream was that one day, you could own a Tesla that would drive itself – taking you to work in the morning and then making money for you all day by autonomously shuttling other passengers. But despite years of promises, that vision remains a mirage. The technology still falls short, and timelines continue to slip. Recently, even Tesla has started talking more about fleet models, hinting that its long-term plan may look less like a car you own and more like a service you access.
Other companies are already preparing for that shift and believe they can beat Tesla at its own game. Players like Zoox (Amazon), Motional (Hyundai + Aptiv), and Apple’s rumored AV efforts are all eyeing a future of shared autonomy. They’re betting that the economics of scale, full-service platforms, and tighter ecosystem control will make robotaxi networks more viable than individually owned self-driving cars.
This transition could reshape cities. If fewer people own cars, the need for sprawling parking lots and garages could decline. In places like Dallas or Houston – where car culture defines the landscape – this may hurt parking operators and businesses that rely on personal vehicle access. But in dense urban areas with limited parking, fewer private cars could remove a key friction point, making it easier for people to spend time and money downtown. We may see less land devoted to storing idle vehicles and more space opened up for public use or development.
The experience inside the car will change too. As driving becomes optional, the cabin becomes more of a lounge or workspace. We’re already seeing larger touchscreens, fewer physical buttons, and integrated entertainment systems. In a self-driving world, the car becomes an extension of your digital life – emphasizing high-quality audio, immersive video, seamless connectivity, and minimal friction. You won’t need to focus on the road, so the system will focus on you. What was once a tool for movement becomes an environment for experience.
None of this will arrive overnight. There are still open questions around regulation, safety, reliability, and public trust. The rollout will likely be uneven, happening faster in some cities or countries than others, and limited to specific use cases before expanding. But the trajectory is clear. The convergence of better hardware, smarter AI, and large-scale infrastructure investment is steadily pushing us toward a world where driving is no longer a daily task, but an optional background function.
Understanding that future – how it works, who controls it, and what it means for everything from urban design to individual freedom – will be key not just for engineers and policymakers, but for anyone who participates in the way society moves.
Spring 2023
Published in INSIGHT – Indiana University School of Medicine Student Research Journal
[This short story describes my experience volunteering in a local pediatric emergency department as part of a study to collect pulmonary recordings for the development of a smart stethoscope. Patient and physician names have been changed.]
In the pediatric emergency department (ED) of Johns Hopkins Hospital, two patients share a single room, their beds arranged in parallel. Each bed is flanked by chairs for family members, and an opaque blue curtain bisects the room, its spongy texture attracting the curious hands of more than one toddler on any given day. After spending several weeks recruiting patients for a study testing a new stethoscope, I began shadowing Dr. H, a young pediatrician in the pediatric ED. I felt accustomed to the general comings and goings of the throngs of doctors, nurses, PAs, and patients streaming through the maze-like facility. Evenings were noisier than mornings, and now, at 5:30 pm on a Wednesday evening, even the fax machines and telephones were drowned out by the cries of children. As I followed Dr. H from room to room, I tried my best to keep out of the way while listening to and learning about each of her patients.
As Dr. H and I stood in front of Room 05, she articulated stories about the highest acuity, most contagious patients she had seen throughout her career in encyclopedic detail. Today, she was telling me about her preparations for coronavirus, should a patient arrive with the associated symptoms. Regimented, clear protocols laid out before the event — that’s how you prevent panic and infighting for resources, she said. Our conversations always pushed me to think about medicine from a broader perspective, and the energy and passion Dr. H brought to patient care were palpable. She squeezed one of the countless hand sanitizer dispensers lining the walls, rubbing her hands together almost unconsciously as we waited for the nurse to finish charting on the bedside laptop. Dr. H leaned against the wall. “It’s been a long week — but here I’m always learning, always teaching. In a way, it’s a privilege.” The nurse exited the room, signaling our cue to enter. Dr. H stood upright, squeezed the Purell dispenser again, rubbed her hands together, and turned the aluminum door handle to Room 05. Inside, an exhausted mother lay asleep, cradling her infant in white sheets on one side of the room. In the other bed, a toddler sat upright in her mother’s arms before tossing a stuffed dinosaur at my feet. I laughed and gingerly returned it to an empty chair.
“Maddie!” Dr. H smiled. “It’s been a while. You’ve gotten so big.” The mother flashed a brief smile, and they exchanged pleasantries for a few moments. “Can I listen to her heart and lungs?” Dr. H asked.
“Yes,” murmured the mother, positioning Maddie so she faced the physician. “She’s been doing good for a couple days. No more coughing.” Dr. H squeezed the toddler’s hand, and Maddie’s eyes flitted toward Dr. H, but only for a moment. The child’s face suddenly twisted and she began to wail. Dr. H cooed and soothed her within seconds, flashing a stethoscope retrofitted with a plastic clicking frog. Maddie stopped mid-wail, examining the frog intensely, as the doctor placed another stethoscope on her chest and back.
After a brief pause to auscultate, Dr. H issued an encouraging assessment. “She sounds clear! I don’t hear any more crackles or wheezing, so this is a great sign.” The mother perked up, her shoulders visibly relaxing. “I’m always happy to see Maddie,” Dr. H continued as she leaned forward, gently taking one of the child’s fingers in her hand and waggling it, “but I’d be even happier to not see you in the hospital for a while. Now, go home and get some rest. Your nurse will be here soon to go over her paperwork.” The mother was ecstatic. She thanked Dr. H profusely, glancing up intermittently as her fingers rapidly tapped out a text on her phone, presumably to her spouse.
We exited the room, and I heard the heavy door slide closed behind Dr. H. I turned and was shocked to see a look of frustration on Dr. H’s face. “She shouldn’t even be here in the emergency room,” she said. I nodded as she clarified: “Coming into the ED presents its own risks to the most vulnerable patients I see, but for many families it’s the only way they can get care. Maddie’s lungs sound great, and she has a greater chance of catching something just by being here.” As we left the room, I wondered aloud: Why had Maddie’s mother brought her all the way to the ED when she appeared to be nearly recovered from her sickness? Perhaps Maddie had no pediatrician, or the ED was the closest source of medical care for her daughter in East Baltimore? There are no easy answers, said Dr. H, and I agreed. She reached for the hand sanitizer dispenser. Squeeze. We walked down the hall to the next room.