I got access to the FSD Beta several months ago (early October 2021). When I got it, I was completely surprised at how poorly it performed in my area. I was surprised because 1) I’d seen a lot of hype about how good it is (including from Elon Musk and other people I generally trust when it comes to Tesla matters) and 2) I live in a really easy-to-drive area (a Florida suburb). When I first started using the FSD Beta, I wasn’t expecting it to have such big problems with basic driving tasks in an easy driving environment. However, I held onto some hope that it would learn from its mistakes and from the feedback I’d been sending to Tesla HQ. Surely, I figured, some glaring problems wouldn’t be difficult to rectify and each update would be better and better.
I’ve seen some improvements since then. However, updates have also brought new problems! I wasn’t expecting that, at least not to the degree I’ve seen it. I’ve thought about this for a while, basically trying to understand why Tesla FSD is not as good as I wish it was by now, and why it sometimes gets worse. One potential problem is what I call the “whack-a-mole problem.” If my theory is correct to any appreciable degree, it could be a fatal flaw in Tesla’s broad, generalized approach to self-driving.
My concern is that as Tesla patches reported problems and pushes new software out to customers’ cars, those patches create problems elsewhere. In other words, Tesla is just playing software whack-a-mole. I’m not saying that’s definitely happening, but if it is, Tesla’s approach to AI may not be suitable for this purpose without major changes.
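A toy sketch of that patch-here-break-there dynamic (the scenario names, confidence numbers, and threshold logic here are entirely my own invention for illustration, not anything from Tesla’s actual system): if a single shared parameter gets tuned to satisfy one user’s report, another user’s scenario can regress, because the two scenarios demand opposite responses to similar-looking inputs.

```python
# Hypothetical: each scenario is the perception system's obstacle confidence
# plus whether braking is actually the correct response there.
scenarios = {
    "overpass_shadow":   (0.70, False),  # phantom-braking complaint: should NOT brake
    "stalled_car_ahead": (0.60, True),   # safety report: SHOULD brake
}

def brakes(confidence, threshold):
    """Brake whenever perceived obstacle confidence exceeds the threshold."""
    return confidence > threshold

def failures(threshold):
    """List the scenarios the current threshold handles incorrectly."""
    return [name for name, (conf, want) in scenarios.items()
            if brakes(conf, threshold) != want]

# Patch 1: raise the threshold so shadows stop triggering phantom braking...
print(failures(threshold=0.75))  # ...and now the real hazard is missed

# Patch 2: lower it again so the stalled car is caught...
print(failures(threshold=0.55))  # ...and the phantom braking is back
```

In this toy setup no single threshold satisfies both scenarios, so every “fix” pushed for one group of users shows up as a regression for another; only a response that distinguishes the scenarios themselves can escape the loop.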
Having driven for months thinking about what the car sees and how the FSD software responds, I’ve come to appreciate how much more subtlety there is in driving than we usually realize. There are all kinds of little cues, differences in route, differences in traffic flow and visibility, animal activity, and human behavior that we notice and then choose to either ignore or respond to (sometimes watching closely for a while as we decide). We choose between those two options because we know that small differences in a situation can change how we should respond. The factors that determine whether we react or not are so extensive that it can be really hard to put them into boxes. Or, to put it another way: if you put something into a box (“act like this here”) based on how a person responded in one drive, it’s inevitable that the resulting rule will be applied incorrectly to a similar-looking but different scenario, and will cause the car to do what it shouldn’t (e.g., respond instead of ignore).
Let me try to put this in more concrete terms. The most common route I drive is a 10-minute drive from my home to my children’s school. It’s simple driving on mostly residential roads with wide lanes and moderate traffic. Back before I had the FSD Beta, I could use Tesla Autopilot (adaptive cruise control, lane keeping, and automatic lane changes) on most of this route and it would do its job flawlessly. The only reason I didn’t use it on almost the entire drive was the pothole issue and some particularly bumpy sections where you need to drive off-center in the lane so you don’t make everyone’s teeth chatter (only a slight exaggeration). In fact, aside from comfort and tire protection issues, the only reason it couldn’t do the full drive is that it can’t make turns. When I passed the safety score test and got the FSD Beta, that also meant giving up radar and relying on vision only. The new and improved FSD software is supposed to do the same job but also handle those turns. However, the vision-only (no-radar) FSD Beta has had problems — primarily, a lot of phantom braking. Whenever a new version of the FSD Beta comes out and Tesla fans are buzzing about how much better it is, I eagerly upgrade and give it a try. Sometimes things get a little better. Other times they get much worse. Lately, the car has been engaging in some crazy phantom swerving and more phantom braking, and it seems to be responding to different cues than it responded to on previous drives. This is the kind of thing that gave me the intuition that patches for issues identified elsewhere by other Tesla FSD Beta users have led to overreactions in some of my driving scenarios.
In short, my hunch is that an overly generalized system, at least one based solely on vision, can’t respond adequately to the many different scenarios drivers go through every day. And resolving each small trigger, or false trigger, in the right way involves a lot of nuance. Teaching the software to brake for “ABCDEFGY” but not for “ABCDEFGH” might be easy enough, but teaching it to respond correctly to 100,000 subtle variations of that is impractical and unrealistic.
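To make the “ABCDEFGY” vs. “ABCDEFGH” point concrete, here is a hypothetical sketch (the feature strings and the similarity-based responder are my own stand-in for any learned interpolation, not Tesla’s method): a system that generalizes from labeled drives copies the response of the most similar known scene, so a variant one feature away can get exactly the wrong behavior, and covering every variant with its own rule blows up combinatorially.

```python
from difflib import SequenceMatcher

# Scenes encoded as feature strings, labeled from users' drives.
labeled_drives = {
    "ABCDEFGH": "ignore",  # correctly ignored on one user's drive
    "QRSTUVWX": "brake",   # a genuinely hazardous scene
}

def respond(scene):
    """Respond the way the most similar labeled scene was handled
    (a stand-in for any similarity-based generalization)."""
    best = max(labeled_drives,
               key=lambda s: SequenceMatcher(None, s, scene).ratio())
    return labeled_drives[best]

# One feature away from the 'ignore' scene, but suppose this variant is a hazard:
print(respond("ABCDEFGY"))  # 'ignore' -- the wrong response for this scene

# Covering every variant with its own rule: 8 features x 26 values each.
print(26 ** 8)  # 208827064576 potential scenes
```

The variant lands closest to the “ignore” example and inherits its response, and enumerating a rule per variant is hopeless at this scale; whatever the real feature space looks like, the combinatorics are the point.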
Perhaps Tesla FSD can reach an acceptable level of safety with this approach anyway. (I’m skeptical at this point.) However, as several users have pointed out, the goal should be for drives to be smooth and comfortable as well. With this approach, it’s hard to imagine that Tesla could cut phantom braking and phantom swerving enough to make the driving experience “satisfactory.” If it can, I’ll be happily surprised and among the first to celebrate it.
I know this is a very simplistic analysis, and the “whack-a-mole problem” is just a theory based on user experience and a very limited understanding of what the Tesla AI team is doing, so I’m not at all saying this is a certainty. However, it makes more sense to me at this point than assuming Tesla will adequately teach the AI to drive well through the many slightly different environments and scenarios where the FSD Beta has been deployed. If I’m missing something or my theory is clearly wrong, feel free to roast me in the comments below.
Appreciate CleanTechnica’s originality? Consider becoming a CleanTechnica Member, Supporter, Technician, or Ambassador, or a patron on Patreon.