AI, ML, Algorithms, and Consciousness

We own a Tesla, and one of the most frustrating yet insightful pieces of technology in this car is – wait for it – the windshield wipers. From other cars we are used to intermittent wipers, which is a whole story in itself, but the Tesla has a windshield wiper mode called “Automatic”, which in this case means the decision as to when to wipe the windshield is in the hands (so to speak) of an AI – an Artificial Intelligence, which really means a Machine Learning (ML) system.

What would an algorithm for a windshield wiper that got its input from a camera look like? Naturally we would first consider the question, “What determines when a driver thinks it is time to wipe the windshield?” – presumably, it’s when their “threshold of fuzziness” has been exceeded. Different drivers might have different thresholds. Some like it cleared as soon as a few drops land; some are content to wait until it’s hard to read the license plate on the car ahead; others follow their own heuristics.

Now I have worked on many computer applications and developed many algorithms. An algorithm is a set of computer instructions that delivers a result that varies with the input provided. The Tesla doesn’t have any in-windshield sensors, so it relies for its input on the front camera, mounted just in front of the rear-view mirror, looking forward through the windshield.

Without going too deep into math, some kind of calculation that detects how fuzzy an image is would be key to an auto-wiper algorithm. (There are plenty of choices for this detector, probably having to do with correlation, autocorrelation, and frequency analysis using an FFT.) Most modern mirrorless and DSLR cameras have fast auto-focus systems that solve a very similar problem: determining when a picture is as sharp as possible. Instead of adjusting the focus ring, this kind of algorithm in the Tesla would just turn on the windshield wipers when the picture gets too “fuzzy”.
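Without claiming this is what Tesla actually computes, here is a minimal sketch of one classic sharpness detector: the variance of the image’s Laplacian. A crisp scene has strong edges, which give the Laplacian a high variance; a rain-smeared one does not. The function name and test images are purely illustrative.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Variance-of-Laplacian score for a 2-D grayscale image (higher = sharper)."""
    # 3x3 Laplacian applied via explicit neighbor shifts, so no SciPy is needed.
    lap = (-4 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

sharp = np.tile([0.0, 1.0], (64, 32))   # hard vertical edges: high score
blurry = np.full((64, 64), 0.5)         # featureless gray: score of zero
assert sharpness(sharp) > sharpness(blurry)
```

An auto-wiper loop would simply compare this score against a threshold frame after frame; the interesting design question is where that threshold comes from, which is the subject of the next paragraph.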

If we developed this algorithm we might give the driver a control labeled something like “Sensitivity” that adjusts how early or late the algorithm starts the wipers, and that same threshold could also serve to detect when the windshield is clear enough that the wipers could be stopped. If the driver didn’t like the current setting they could adjust for their own “threshold of fuzziness”.
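A hedged sketch of how such a Sensitivity dial might work: the dial sets a start threshold, and a lower stop threshold provides hysteresis so the wipers don’t chatter on and off around a single value. Every name and number here is invented for illustration, not taken from Tesla.

```python
def wiper_state(fuzz: float, wiping: bool, sensitivity: float) -> bool:
    """Decide whether to wipe, given a fuzziness score (higher = blurrier)."""
    start = 1.0 - sensitivity      # higher sensitivity -> wipe sooner
    stop = start * 0.5             # keep wiping until the view is clearly clear
    if not wiping and fuzz > start:
        return True
    if wiping and fuzz < stop:
        return False
    return wiping                  # otherwise, no change

# With sensitivity 0.7, wiping starts above fuzz 0.3 and stops below 0.15.
assert wiper_state(0.4, False, 0.7) is True
assert wiper_state(0.2, True, 0.7) is True    # hysteresis: still wiping
assert wiper_state(0.1, True, 0.7) is False
```

The two-threshold design is the whole point: a single cutoff would start and stop the wipers every few frames as the score wobbled around it.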

Sadly, the Tesla wipers appear not to be driven by an algorithm, and they definitely don’t have a Sensitivity adjustment. They seem to run on an ML – a machine learning system – and they do very weird things. There are times when I can barely see out the windshield and am silently begging the auto-wipers to come on (yes, I could set them to wipe manually, but then what would this blog be about?). Then there are the mystery panics. Once in a while the wiper ML has a small panic attack, frantically wiping the windshield at high speed, unable to stop itself even long after the view is clear. Everyone in the car asks, “Why is it doing that?” and the answer is the key to this entire debate. The answer is “No one knows.”

In an ML system, no one really knows why it does what it does. It can’t explain it to you, not only because it hasn’t been given the power of speech, but because there is no one “there” to ask. An ML is a statistics engine that makes decisions based on weighting all its inputs. You can look at the weighting values of every element in an ML but it won’t tell you anything, literally or figuratively.

Most ML systems are based on neural networks, which are designed to model how the brain works. Each of our neurons fires when its input threshold is met, and neurons are connected in ways that form complex networks from the time we are born, as we begin to sense and react to our environment. But if you open someone’s brain and try to see why they prefer vanilla over chocolate, all you can find is trillions of neurons firing. Watching neurons won’t explain how people or things learn; you have to look at the patterns the neurons form. Cognition is in the pattern of neurons, but that still can’t tell you why an ML (or a brain) made a certain decision. For that, you need metacognition: the ability to recognize patterns, remember and recall them, give those patterns a name, and be able to express them, to share them with another conscious entity, so both better understand how they think.
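The threshold idea above fits in a few lines. Here is a toy artificial neuron – weights and threshold chosen arbitrarily for illustration – that sums its weighted inputs and fires only when the sum crosses its threshold:

```python
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs meets the threshold, else 0."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two inputs, both needed to fire: this neuron behaves like a logical AND.
assert neuron([1, 1], [0.6, 0.6], threshold=1.0) == 1
assert neuron([1, 0], [0.6, 0.6], threshold=1.0) == 0
```

One neuron is legible; the opacity the paragraph describes appears only when thousands of these are wired together and the weights are learned rather than chosen.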

When raising a child, parents spend a lot of time helping children learn words, and use them. They show their kids what these words mean: concrete words like ball, Cheerio, and tree, as well as abstract ideas like playing nicely, feeling hungry, or needing a bathroom. Parents teach their kids cognition and metacognition, and help them grow, survive, and learn in the real and complex world.

Current AI is misnamed. These systems don’t seem very intelligent to me, just clever. They are cognitive but without metacognition. They don’t understand what they are, or what they are for, or why they decide as they do. I for one am thankful that our ML systems are not conscious, since we treat them very poorly. Training an ML is like putting a child into a dark, silent closet and feeding them information they don’t comprehend, asking for an undefined result, then giving them shocks until they answer the way you want them to. This technique of training ML systems is called back-propagation and is probably harmless to electronic neural nets. I just worry that if consciousness were to emerge from a sufficiently complex ML, it is going to be pissed off about how we treated it.
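Back-propagation in miniature – assuming nothing about any real system, just a single sigmoid neuron learning logical AND: show it an example, measure how wrong it was (the “shock”), and nudge every weight against the error’s gradient, thousands of times over.

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

for _ in range(5000):
    for x, target in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = out - target            # the "shock": how wrong the answer was
        grad = err * out * (1 - out)  # gradient back through the sigmoid
        w[0] -= grad * x[0]           # nudge each weight against the error
        w[1] -= grad * x[1]
        b -= grad

for x, target in data:
    assert round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) == target
```

Notice that after training, the learned weights are just three unremarkable numbers: they encode the behavior perfectly, yet inspecting them tells you nothing about “why” – which is the essay’s point about opening up an ML.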

I tend to prefer algorithms over ML, because building an algorithm means humans have striven to understand the problem, and get to enjoy the satisfaction of solving it, usually in a way that makes the lives of many other humans easier. If our Automatic wipers ran an algorithm and I could adjust my Threshold of Fuzziness dial, I would feel some agency in being able to tune my environment for my own perception of comfort and safety.

ML systems are capable of learning over time, but I don’t see any evidence that the Tesla wiper ML considers my input. No matter how many times I have mashed the “wipe now” button, it doesn’t learn my preferences and adapt. Tesla collects millions of videos of cars driving and wipers wiping, and has updated the wiper ML in our car several times as part of software updates, but in my experience it remains almost unchanged. Perhaps when you train systems on data from millions of users, you get something that pleases no one.

While I would like our own car’s wiper ML to be more adaptive to my sense of safety and comfort, I dread making something so smart that it becomes conscious, like the Sirius Cybernetics Corporation doors in the Hitchhiker’s Guide to the Galaxy which attain consciousness but are relegated to a life where they are incapable of doing anything other than open a door when asked.

I never worry about that when it’s just an algorithm, and perhaps that is the big difference between ML and algorithms. In an auto-wiper algorithm there would be variables that we can expose to user control (like a knob or dial) that let the driver select how fuzzy a view they can tolerate before the wipers should be activated. In an ML there’s no such single variable – the cognition is distributed among hundreds or perhaps thousands of digital neurons. Feedback can adjust an ML and change its behavior, but just like trying to change a youngster, it can take time and patience. I wouldn’t mind training our Tesla’s wiper ML – I have certainly given it lots of input over the years – but it just doesn’t seem to listen to what I say…

The Power of Story in Agile Development

Stories have been part of our human experience since the discovery of fire. We use stories to teach, entertain, convince, empathize and inspire. As engineers and software developers we can greatly improve our products and customer experience by incorporating stories – of our customers, and of ourselves – into our Agile software development process.

Here are the slides for the Agile/Product Management Meetup at Hootsuite, Thursday, Jan 28.