Home

By Robert A. Vella

“Call me old, call me grumpy, but I don’t want no self-driving cars!”

Such sentiments are typical of elderly people who are stuck in their ways, resistant to change, and generally unhappy with themselves and the new world unfolding around them.  It could also signal a warning for a profit-driven society unable to keep up with technological advances evolving at a breakneck pace.

In the case of self-driving cars, both interpretations would seem to apply.  Computerized automation is a fact of life in the modern world, and it is increasingly prevalent in transportation systems.  From maritime shipping to airline travel and high-speed rail systems, moving people and products from place to place and around the globe would be far more difficult without it.  Along with the efficiency provided by automation, safety is an added benefit.  Today’s aircraft technology, for example, has made piloting, navigation, and threat avoidance so much easier that some critics have blamed it for creating pilots who don’t know how to fly a plane anymore in the conventional sense.

However, there is a big difference between automation in air, ocean, and rail systems – which are heavily regulated and require intensive training and certification of their operators – and vehicular automation along the countless roads which crisscross much of the world’s land surfaces.  The more complex nature of road traffic, particularly in cities, and the lower overall skill level of driving for personal use pose greater technological problems for computer automation.  More uncertainty means more errors, and more errors mean more tragic malfunctions.

From:  Exclusive: Arizona governor and Uber kept self-driving program secret, emails reveal

Arizona’s Republican governor repeatedly encouraged Uber’s controversial experiment with autonomous cars in the state, enabling a secret testing program for self-driving vehicles with limited oversight from experts, according to hundreds of emails obtained by the Guardian.

The previously unseen emails between Uber and the office of governor Doug Ducey reveal how Uber began quietly testing self-driving cars in Phoenix in August 2016 without informing the public.

On Monday, 10 days after one of Uber’s self-driving vehicles killed a pedestrian in a Phoenix suburb, Ducey suspended the company’s right to operate autonomous cars on public roads in Arizona. It was a major about-face for the governor, who has spent years embracing the Silicon Valley startup.

From:  Tesla plunge: Fears of a cash crunch and production delays

It’s been a brutal month for Tesla: Analysts are worried about delays shipping its new Model 3, and say it will soon face a cash crunch. Plus there’s a government investigation into a recent fatal Model X crash.

The stock has now plunged 25 percent in March, following sharp declines this week.

[…]

Because the company is investing in expanding its production and car lineup, its costs are high and its revenue hasn’t yet caught up. The result: Tesla will burn through about $900 million of cash per quarter in the first half of 2018, according to UBS Securities analyst Colin Langan, who has a “sell” rating on the shares.

Even hitting its production goals won’t save the day, according to Langan, since each car will be a money-loser after accounting for spending on research and development and other costs.

Tesla is also under scrutiny after the fatal crash of a Model X in California on Friday. Tesla blamed a missing safety barrier for the severity of the crash. The National Transportation Safety Board said on Tuesday that it was opening an investigation into the crash, noting it was unclear if Tesla’s automated control system was active at the time.

Tesla pointed to a blog post about the crash, in which it said it’s working with authorities to recover the car’s logs.

“We have never seen this level of damage to a Model X in any other crash,” Tesla said in the blog post.

40 thoughts on “More self-driving car troubles”

  1. Hello Robert. I don’t know much about the recent crash, but I read somewhere that the local police had said the crash would have happened with or without the automated driving. But on the subject of automated self-driving vehicles, I am really excited about it. I think when it gets worked out it will be safer, especially on highways. Mostly I like the idea because I live in an area without much mass transit. The population is majority elderly, and while some still drive OK, the majority have trouble driving and handling the amount of traffic they face. They freeze at intersections and pulling into traffic, and just today I was following an older lady at 40 MPH in a 55 MPH zone. While I was not really too bothered, I could see the line behind getting longer and the drivers were frustrated at the slowdown. However, the biggest problem for the elderly and disabled is they cannot drive due to medications or disability, and self-driving cars would help greatly in their lives. Be well. Hugs

    • If and when vehicle automation technology is worked out, I would agree with you. But, it’s not there yet. In the meantime, I’m concerned that the profit motive is pushing it too fast. Our roads – full of marginal drivers like those you mentioned, passengers including children, bicycle and pedestrian traffic – shouldn’t be a testing ground for new technology.

      • In the searches I have done this morning, it seems the accidents would have happened with or without the automation. For example, one involved not being able to stop in time after a pickup dropped something onto the road ahead. I would be interested in the data on automated accidents per total automated vehicles, compared to accidents by drivers per total number of regular vehicles on the road. Oh, and good morning. Hugs

        • Scottie, you should do some more searching. Try “self-driving car accidents” and you might be shocked by the results. Here’s one incident that gets right to the heart of the matter, from CNN:

          The latest permutation on this theme occurred in May, when a self-driving Tesla-S failed to register the side of a white tractor-trailer truck against a pale sky. In its statement on the accident, Tesla is quick to remind us that the 40-year-old man killed in the crash was a technology consultant and autonomous vehicle enthusiast — as if a martyr for the greater cause of civic transportation.

          If anything, the cause of the crash can be chalked up to the incompatibility between humans and autonomous vehicles. Had the tractor-trailer also been driven by computer, it could have been on the same network as the Tesla. Like an air traffic control system, the network could have orchestrated the safe passage of both vehicles.

          The problems emerge when computerized vehicles don’t have such networking at their disposal. Instead, we’re asking the poor Tesla to drive using the same senses mere humans use – which is why the car missed the fact that its entire field of vision was occupied not by sky, but by truck. As autonomous vehicle proponents like to point out, these problems would be solved if robotic cars weren’t required to share the road with humans. We people are the problem.

          It’s an argument reminiscent of that made by early car manufacturers, who were being criticized for the high numbers of pedestrian injuries and fatalities on streets. The companies went on a massive public relations effort to shift the blame, and came up with the term “jay walker” to describe the country rube who didn’t know how to cross a street and was deserving of ridicule. Automobile clubs encouraged people to exterminate “the Jay Walker family” – and their little Walker children. Presumably, this was to be done through education, not running them over with cars.

          I agree with Jeff on the impetus behind self-driving cars. This is all about the commercial exploitation of human laziness and Millennials’ obsession with technology. In comparison, computer automation in other transportation systems such as maritime shipping, airlines, and rail, were primarily motivated by safety and efficiency.

        • They’re far from perfect at this point in time, but personally, I think self-driving cars (as they become more sophisticated) are most definitely a good thing. I do qualify my opinion by adding they should not be “mass-produced” until all the kinks have been worked out.

          However, having said this, it’s highly doubtful I’ll still be around to ever “drive” one. 🙂

        • Before we make too many self-driving cars, I think we need to focus on making Republicans with self-starting empathy. I know. I know. You think that’s impossible, and I’m asking too much of society, but I think, if all the scientists in the world put their minds to this, we can at least create a few of these Republicans. Would it be worth the financial investment? Well, I’m not sure. Republicans would undoubtedly say, “No.”

        • Hi Robert. I read two different stories on the crash between the Tesla and the Semi truck. The first one pointed out that fault was on both vehicles.

          The board unanimously voted to accept that the probable cause of the crash included a combination of factors. The first was the failure of the driver of the truck that the Tesla collided with to yield to the car, and the second was the driver’s inattentiveness due to his overreliance on Autopilot.

          The second made the point that the system did not fail but functioned as designed.

          In its final report, the agency said it “did not identify any defects in the design or performance” of Autopilot, or “any incidents in which the systems did not perform as designed.” The agency also noted that the frequency of crashes involving Tesla models declined by about 40 percent after the company introduced Autopilot.

          I guess I see more of the positives of the system, even at this stage, than others do. We have automobiles now that can detect a sudden stop or impediment and bring the car to a fast, safe stop even before the driver can react. Are there problems? Sure. Are the problems part system, design, and programming, or human failures? I think both.

          I do not know what happened when autos first came out. Sad the fledgling industry took the tack it did in blaming people on the ground rather than the operators. However, I do not see that as the same today. I also think this is motivated by safety. The 40% drop in crashes since the introduction of the system in Teslas shows it is a safety issue.
          Well, I will keep watching the news and see what future developments bring. It may be we are not ready for this technology. Or it may be we need to educate people about what it really is for and how to adapt to it. Hugs

        • What are you quoting from? It sounds like a corporate responsibility avoidance opinion to me, just like negligent airlines who blame every accident on “human error.”

          The motive behind self-driving cars is safety? Bullshit.

        • It is an interesting subject. I enjoyed reading more on it. I look forward to what may come of all the ideas and tech. It may be good for us or it could be a very bad thing. With automation in other areas, there could be more able-bodied people to drive the handicapped and elderly where they need to go. Or our country may get on board with mass transit. I think either way it will cost money, and our states and counties and even the federal government hate to spend the needed money on infrastructure and transportation. Hugs

        • From the NYT article:

          The regulators warned, however, that advanced driver-assistance systems like the one in Tesla’s cars could be relied on to react properly in only some situations that arise on roadways. And the officials said that all automakers needed to be clear about how the systems should be used. Almost all major automakers are pursuing similar technology.

          “Not all systems can do all things,” said Bryan Thomas, a spokesman for the National Highway Traffic Safety Administration, the agency that investigated the car involved in the May accident. “There are driving scenarios that automatic emergency braking systems are not designed to address.”

          Tesla’s self-driving software, known as Autopilot, has proved adept at preventing Tesla cars from rear-ending other vehicles, but situations involving crossing traffic — as was the case in the crash that regulators investigated — “are beyond the performance capabilities of the system,” Mr. Thomas said.

          “Autopilot requires full driver engagement at all times,” he said.

        • Yes. The systems are not fully ready for the sci-fi dream of sitting back and reading the paper while the AI drives for you. I never argued they were. I believe that in the crashes I read about, the systems were not the major fault. They are not perfect, but I think they are the way of the future. Plus there is a lot more work being done on AI systems. I also understand humans have faults that computer systems can be designed to overcome or eliminate. No more drivers being hungover, on medications or drugs, or too tired to function, to name a few things I can think of. I remember when office robots were the big deal and they had to put lines on the floor for the roving bots to follow. We have come a long way, and we will go farther in the future. The days of George Jetson are not here yet, but if we don’t blow ourselves up, destroy the habitat we need to live, or take ourselves back to the Stone Age, it will come one day. Hugs

        • Scottie, I wrote AI software as experimental projects during my long career as a computer programmer/consultant. Although the technology has obviously progressed since then, I can tell you with great certainty that self-awareness, moral judgements, and intuition are far beyond the capabilities of modern digital computer systems. Therefore, it is incorrect to assume that computers will achieve anything resembling human perception and cognition within the foreseeable future. For that to happen, some new technological breakthrough such as quantum computers would be necessary.

        • Hello Robert. I do not agree. I do not doubt your qualifications, but I also do not discount human invention and drive. Look, ten years ago I wouldn’t have thought a huge robot could be made to do backflips, but now they can. Totally untethered. I remember when people said they would never be able to do stairs. Now that is a simple task for them. It comes down to hardware and software advancements. Also, I think we are really talking about different things. You mention moral judgements, self-awareness, intuition. I am not talking about a Cortana from Halo or a Rommie from the Andromeda show. I am talking about systems that take the failures of human error out of a known programmable system. It will not be perfect as long as humans can keep putting in the errors. I am talking about things like on our last van. It has a braking system such that if you jam on the brakes hard, it takes over the entire braking system. It takes control away from the driver and does the emergency braking on its own in a much better way. It worked, and the one time I needed to use it, it saved Ron and me. I was driving at about 60 and a car pulled out into the roadway. I hit the brakes and the horn. The car driver panicked and, instead of trying to move forward more quickly, stopped. I should have lost control of the van with the brakes locked up. Instead, I still had steering, and with the system decelerating I was able to steer off the road and not hit the stupid jerk who pulled out.
          The system worked. Now in the new model we are looking at to replace the old one, they have a system that looks ahead and reacts to stop the van without input from the driver. I don’t pretend to understand how it works other than radar or something, but the computer is taking in all that input and making decisions to act without the driver telling it to. We are looking at the Kia Sportage also. The new hydraulic all-wheel drive has a computer system with sensors that sample the road and tire speeds at an incredible rate and adjust hydraulic motors to each of the back wheels to give the best feel for what driving mode you are in, what is best for traction, and also what is best for wear on the tires. That is a huge advancement. The Toyota Tacoma off-road automatic transmission has a system that takes over the engine and wheels/drive when you push a button and “crawls itself out of being buried”. They show it on the Toyota videos. It does it all by itself and it really does dig itself out.
          I was listening to Vox on YouTube. They have a video called “AI, are we creating god?” They seem to disagree a bit with your assessment. Again, I do not know what we as an intelligent species can or will do, but I wouldn’t bet against great leaps into things we at this point say we can’t do. Hugs

        • You are discrediting my professional expertise and equating your unqualified opinion to it. But, I don’t really care what you believe. A VOX video on YouTube? Seriously? Do you have any understanding whatsoever of knowledge-based systems and inference engines? I doubt it.

          When computer programmers get together and talk about AI privately, they laugh. It’s a sensationalized term for public consumption. AI software is extremely sophisticated, but it is not “intelligence” by any stretch of the imagination. But, just ignore all this and keep believing whatever you want. That’s what most people do to reinforce their preconceptions, and that’s why we have nonsensical belief systems like religion. Let’s all pray to the almighty AI god!
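          For readers unfamiliar with the term, a small illustration of what an inference engine actually does may help. The sketch below is my own hypothetical example, not code from any real product: a toy forward-chaining rule engine that derives new facts by mechanically matching hand-written rules against a fact base. Everything it "knows" was authored by a human, which is the point being made about the word "intelligence".

```python
# Toy forward-chaining inference engine (illustrative only).
# All "reasoning" here is mechanical pattern matching over a fact base:
# the rules and facts are hand-authored; nothing is learned or understood.

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all premises hold and it adds a new fact.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical driving-domain rules, purely for illustration.
RULES = [
    (("object ahead", "closing fast"), "collision likely"),
    (("collision likely",), "apply brakes"),
]

print(sorted(infer({"object ahead", "closing fast"}, RULES)))
```

The engine "decides" to brake only because a human wrote that rule; given facts outside its rule base, it derives nothing at all.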

        • Robert, I am sorry you feel that way, as I was not discrediting you. I do not recall claiming an expertise beyond living long enough to see changes many claimed would never happen. Vox is an information channel, not a random person’s channel. I think you need to read what I wrote again. I did not say the system had its own intelligence beyond what parameters are written into the system. Again, this is what happens every time we have a conversation. If a person disagrees with you slightly, which in this case I have done, you lash out with anger. OK, you read things into what I have said that I never advocated. Fine, I am done. You have a great time and a wonderful life. I prefer to have conversations with people who can agree to talk without getting upset. Bye. Hugs


        • Robert — I don’t see Scottie discrediting your intelligence, training, or expertise. Just because he feels that technological progress will lead to things we are simply unable to see right now does not mean your perspective is wrong.

          There are advancements taking place EVERY DAY that we would not have dreamed about 10 years ago. Whether the “automatic car” will ever come to pass … maybe, maybe not.

          But unless you can see the future, wouldn’t it be best to discuss the matter with an open mind?

        • Your debate observation is noted. However, just as you are free to moderate your blog as you see fit, so am I free to moderate my blog as I see fit. Although I am very tolerant of differing perspectives, there are two things I am not tolerant of: 1) comments proclaiming strong opinions without sufficient or convincing evidence, and 2) commentators who purposefully ignore contrary evidence just to reinforce their point of view.

          Scottie’s opinion on AI is not at all equatable to mine. I studied computer science in college, worked in the tech industry for a long time, and was very successful in my professional career. Apple Computer was one of my clients. I’ve written hundreds of thousands if not millions of lines of code including AI. I’ve instructed other programmers. Scottie has done nothing of the kind as far as I know. What I’ve stated in this discussion is true. You and he can refuse to accept it if you will, but don’t try to pass off lay opinions as fact as he did. I have no patience for endless he-said, she-said debates on this blog. Your perspective was welcome, his crossed the line.

          Furthermore, there is a history between Scottie and myself which caused us to stop following each other’s blogs. Only recently did he re-follow mine and I reciprocated. I’ve been polite and respectful on his blog, while he has been frequently confrontational on mine without even having read the content of my posts beforehand. Our problems started during the 2016 election cycle because he supported Hillary and I supported Bernie Sanders. This rift never healed, and got much worse after Trump won because of editorials I posted which criticized the Clinton campaign and establishment Democrats. Scottie was openly angered by these editorials, and it appears those feelings still linger.

  2. I am conflicted about automated cars. Maybe when I am old and I don’t enjoy driving anymore, I may have a change of mind.
    I recently read about a crash of such a car that had a passenger in it who was helpless. He couldn’t do a thing.

      • From that article:

        The death of a pedestrian during a test drive of a driverless vehicle (even as a backup human sat in the driver’s seat) calls into question not just the technology—which didn’t seem to detect the pedestrian crossing a busy roadway and therefore didn’t brake or swerve—but also the notion that driving is nothing more than a set of instructions that can be carried out by a machine.

        The surprised backup driver seemed to have confidence in the inventors of driverless cars as he was looking down at his computer briefly just before impact.

        Certainly, a real human driver might have hit this pedestrian who was crossing a busy street at night with her bicycle. But, of course, as a friend of mine pointed out, there is a big difference in the public mind between a human driver hitting and killing a pedestrian and a robot killing one. If the incident had involved a human driver in a regular car, it would probably only have been reported locally.

        But the real story is “robot kills human.” Even worse, it happened as a seemingly helpless human backup driver looked on. The optics are the absolute worst imaginable for the driverless car industry.

        It makes sense to me that a world of exclusively driverless cars with a limited but known repertoire of moves might indeed be safer than our current world of human drivers. But trying to anticipate all the permutations of human behavior in the control systems for driverless car systems seems like a fool’s errand. I’m skeptical that the broad public will readily accept a mixed human/robot system of drivers on the roads. You can be courteous or rude to other drivers on the road or to pedestrians on the curb. But how can you make your intentions known to a robot? How could a pedestrian communicate with a robot car in the way that approaches the simplicity of a nod or a wave to acknowledge the courteous offer from a driver to let the pedestrian cross the street?

    • Yes, there are many cases of such accidents. Being a helpless passenger for no other reason than pure laziness is a risk I would never take. I applaud your conflict over automated cars.

  3. I want an automated bed – one that bathes me, puts my pajamas on me, puts me into it, and then sings me dulcet lullabies until I nod off. Until such a time, all I can say is, “Do it urself, ya’ lazy idjit!”

Comments are closed.