Does the Tesla Full Self Driving Approach Have a See-saw Problem?

Tesla fans are now well aware of Tesla’s approach to “full self driving,” but I’ll provide a super quick summary here just to make sure all readers are on the same page. Basically, Tesla drivers in North America who bought the “Full Self Driving” package and passed a Safety Score test have a beta version of door-to-door Tesla Autopilot/Full Self Driving activated in their cars.

That means that if I put a destination in my Tesla Model 3’s navigation as I’m leaving the driveway, the car will, in theory, drive there on its own. It’s not close to perfect, and drivers must vigilantly monitor the car as it drives and intervene whenever necessary, but it now has broad capability to drive “anywhere.” When we drive around with Full Self Driving (FSD) on and there’s a problem (either a disengagement or the driver tapping a little video icon to send a clip of recent driving to Tesla HQ), members of the Tesla Autopilot team look at the clip. If needed, they re-drive the scenario in a simulation program and demonstrate the correct response in order to teach the Tesla software how to handle that situation.
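For readers who like to see that loop written out, here is a minimal sketch in Python of the human-in-the-loop cycle described above. To be clear, the data structures and function names are my own illustrative assumptions, not Tesla’s actual pipeline or API:

```python
from dataclasses import dataclass


@dataclass
class Clip:
    """A short recording flagged by a disengagement or the in-car report button (illustrative only)."""
    scenario: str        # rough description of the driving situation
    car_action: str      # what the software actually did
    correct_action: str  # what a reviewer decides it should have done after re-driving it in simulation


def review_and_retrain(flagged_clips, policy):
    """Illustrative only: fold reviewer-labeled corrections back into the 'policy'
    (here just a dict mapping scenario -> action) before a new version is pushed out."""
    for clip in flagged_clips:
        if clip.car_action != clip.correct_action:
            policy[clip.scenario] = clip.correct_action
    return policy  # in reality, a retrained model rolled out to the fleet


# Hypothetical usage
policy = {"four-way stop, no cross traffic": "proceed"}
flagged = [Clip("crosswalk, pedestrian waiting", "proceed", "yield")]
policy = review_and_retrain(flagged, policy)
print(policy)
```

The crucial detail, and the setup for my concern below, is that each correction is global: whatever the software learns from one flagged clip gets pushed out to every car running the update.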

I got access to the Full Self Driving Beta several months ago (early October 2021). When I got it, I was quite surprised at how bad it was in my area. I was surprised because 1) I had seen a lot of hype about how good it was (including from Elon Musk and other people I generally trust when it comes to Tesla matters) and 2) I live in a really easy area for driving (a Florida suburb). I was just not expecting it to have significant problems with basic driving tasks in a driving environment that’s about as easy as it gets. Nonetheless, I retained some hope that it would learn from its mistakes and from the feedback I was sending to Tesla HQ. Surely it couldn’t be that hard to correct some glaring problems, and each update would get better and better.

I have seen some improvements since then. However, updates have also brought new problems! I didn’t expect that, at least not to the degree I’ve seen it. I’ve pondered this for a while, trying to understand why Tesla FSD isn’t as good as I’d hoped it would be by now, and why it sometimes gets significantly worse. One potential issue is what I’m calling the “see-saw problem.” If my theory is correct to any notable degree, it could be a critical fault in Tesla’s approach to widespread, generalized self driving.

My concern is that as Tesla corrects flagged issues and uploads new software to Tesla customer cars, those corrections create issues elsewhere. In other words, they are just playing software see-saw. I’m not saying this is definitely happening, but if it is, then Tesla’s AI approach may not be adequate for this purpose without significant changes.

As I’ve been driving for months thinking about what the car sees and how the FSD software responds, I’ve come to appreciate that there’s much more nuance to driving than we typically realize. There are all kinds of little cues (differences in the roadway, differences in traffic flow and visibility, animal activity, human behavior) that we notice and then choose either to ignore or to respond to, and sometimes we watch a situation closely for a bit while deciding between those two options, because we know that small differences can change how we should respond. The things that make us react, or not, are wide ranging and can be really hard to put into boxes. Or, to put it another way: if you put something into a box (“act like this here”) based on how a person should respond on one drive, it’s inevitable that the same rule will not apply correctly in a similar but different scenario, and it will lead to the car doing what it shouldn’t (e.g., reacting instead of ignoring).

Let me try to put this into more concrete, clearer terms. The most common route I drive is a 10-minute trip from my home to my kids’ school, a simple drive on mostly residential roads with wide lanes and moderate traffic. Back before I had FSD Beta, I could use Tesla Autopilot (adaptive cruise control, lane keeping, and automatic lane changes) on most of this route and it would do its job flawlessly. The main reason not to use it for nearly the entire drive was potholes and some especially bumpy sections where you need to drive off-center in the lane in order to not make everyone’s teeth chatter (only a slight exaggeration). Aside from those comfort & tire-protection issues, the only thing keeping it from driving the whole way was that it couldn’t make turns.

When I passed the Safety Score test and got FSD Beta, that also meant dropping the use of radar and relying on “vision only.” The new and “improved” FSD software could hypothetically do the same task and also make those turns. However, FSD Beta using vision only (no radar) had issues, primarily a lot of phantom braking. Each time a new version of FSD Beta rolled out and some Tesla enthusiasts raved about how much better it was, I would eagerly upgrade and try it out. Sometimes it improved a bit. Other times it got much worse. Recently, it engaged in some crazy phantom swerving and more phantom braking, seemingly responding to different cues than it had responded to on previous drives. This is the kind of thing that gave me the hunch that corrections for issues flagged elsewhere by other Tesla FSD Beta users had led to overreactions in some of my driving scenarios.

\"\"

Tesla FSD on a residential road. © Zachary Shahan/CleanTechnica

In short, my hunch is that too generalized of a system — at least, one based on vision only — can’t respond appropriately to the many different scenarios drivers run across every day. And solving for each little trigger or false trigger in just the right way involves way too much nuance. Teaching software to brake for “ABCDEFGY” but not for “ABCDEFGH” is perhaps easy enough, but teaching it to respond correctly to 100,000 different nuanced variations of that is impractical and unrealistic.
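To make that concrete, here is a toy sketch in Python. It is purely my own illustration, not anything resembling Tesla’s actual software: scenarios are boiled down to short feature strings, and a naive policy labels each one by copying the brake/ignore decision of the most similar example it has been taught. Patching it with one corrective example fixes the flagged case but silently flips a nearby case that was handled correctly before, which is exactly the see-saw I’m worried about:

```python
def similarity(a: str, b: str) -> int:
    """Count positions where two equal-length scenario strings agree."""
    return sum(1 for x, y in zip(a, b) if x == y)


def decide(scenario: str, taught: dict) -> str:
    """Label a scenario by copying the label of the most similar taught example."""
    closest = max(taught, key=lambda example: similarity(scenario, example))
    return taught[closest]


# Version 1: taught from two flagged clips.
v1 = {
    "ABCDEFGY": "BRAKE",   # e.g., something encroaching on the lane
    "QRSTUVWX": "IGNORE",  # e.g., roadside clutter well clear of the lane
}

case_on_my_route = "ABCDEFGH"    # looks almost identical to the BRAKE example, but braking is wrong here
someone_elses_case = "ABCQEFGH"  # another nearby variation where braking is actually needed

print("v1:", decide(case_on_my_route, v1), decide(someone_elses_case, v1))
# v1: BRAKE BRAKE  -> phantom braking on my route, so I flag it

# Version 2: the "fix" adds my flagged scenario as an IGNORE example.
v2 = dict(v1)
v2["ABCDEFGH"] = "IGNORE"

print("v2:", decide(case_on_my_route, v2), decide(someone_elses_case, v2))
# v2: IGNORE IGNORE -> my route is fixed, but the other case no longer brakes.
# The correction for one scenario regressed a similar one: the see-saw.
```

Obviously the real system is a neural network rather than a lookup table, but the failure mode I’m describing is the same kind of thing at vastly larger scale: nudging behavior to fix one set of clips can shift behavior on superficially similar scenarios that used to be handled fine.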

Perhaps Tesla’s Full Self Driving can get to a level of acceptable safety with this approach nonetheless. (I’m skeptical at this point.) However, as several users have pointed out, the aim should be for the drives to be smooth and pleasant. With this approach, it’s hard to imagine that Tesla can cut the phantom braking and phantom swerving enough to make the riding experience “satisfactory.” If it can, I will be happily surprised and one of the first to celebrate it.

\"full

Tesla FSD visualization in a shopping center parking lot. © Zachary Shahan/CleanTechnica

I know this is a very simple analysis, and the “see-saw problem” is just a theory based on user experience and a quite limited understanding of what Tesla’s AI team is doing, so I’m not at all saying this is a certainty. However, at this point in time it seems more logical to me than assuming that Tesla is going to adequately teach the AI to drive well across the many slightly different environments and scenarios where FSD Beta is deployed. If I am missing something or my theory is clearly faulty, feel free to roast me in the comments below.
