The Inappropriate and Problematic Uses of Technology

This posting is directed mostly at AI, or Artificial Intelligence, though the acronym may more appropriately stand for Artificial Ineptness or Artificial Ignorance. Although I am going to briefly outline a few of the big issues, I do think AI could be used for some benefit in a rather small set of contexts. But these uses are as tools, not as decision-makers or similar holders of power and control.

An example I use as a simple demonstration of the limits of AI involves the car I drive, a 2017 base-model Toyota Corolla. It does not have a remote door opener, but it does have some AI systems. The first is collision avoidance: an alarm sounds when you get too close to an object in front of you, and the car applies the brakes if it thinks you are actually going to collide. The second is a camera system that keeps you in your lane. The third, which is related to the collision avoidance system, uses radar to maintain a safe distance from the car in front of you while cruise control is engaged. And the fourth uses the camera and radar to dim the high beams when a car approaches and to turn them back on when no cars are in sight.
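For the technically curious, here is a minimal sketch of the kind of time-to-collision logic a collision avoidance system could use. To be clear, this is my own illustration: the thresholds, the function, and its inputs are assumptions, not Toyota’s actual (and proprietary) design.

```python
# A hedged sketch of forward-collision logic: warn first, brake later.
# Thresholds and names are illustrative assumptions, not Toyota's design.

WARN_TTC_S = 2.5   # warn the driver below this many seconds to impact
BRAKE_TTC_S = 1.0  # apply the brakes automatically below this

def forward_collision_action(distance_m: float, closing_speed_mps: float) -> str:
    """Decide the system's action from radar distance and closing speed."""
    if closing_speed_mps <= 0:
        return "none"                       # the gap is steady or growing
    ttc_s = distance_m / closing_speed_mps  # seconds until predicted impact
    if ttc_s < BRAKE_TTC_S:
        return "brake"
    if ttc_s < WARN_TTC_S:
        return "warn"
    return "none"

print(forward_collision_action(40.0, 10.0))  # 4.0 s away -> "none"
print(forward_collision_action(20.0, 10.0))  # 2.0 s away -> "warn"
print(forward_collision_action(8.0, 10.0))   # 0.8 s away -> "brake"
```

Notice that everything hinges on how well the system estimates distance and closing speed; garbage in, alarms out.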

I do like all four of these, but I don’t really need them, and I do not trust them. The lane control system often mistakes tar repair lines in the middle of a lane for the “real” lane markers, so you have to muscularly override the car’s attempt to move you “back” into the lane, an attempt that is actually moving you out of it. The collision avoidance system starts blaring when you round a curve lined with concrete construction barriers; even though you are moving smoothly around the curve, the AI thinks you are going to collide with them. The high beam control is, in my experience, never more than about 60% accurate: it mistakes house lights for car lights, it misses many oncoming cars, and it almost never dims the high beams when there are cars ahead of you traveling in the same direction. As with any technology, the information we get from it needs to be confirmed by us, real, thinking human beings.
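The curve false alarm, in particular, is easy to explain with the sketch above: if the system projects the car’s path as a straight line, a barrier on the outside of a curve looks like an imminent obstacle, even though the car’s actual arc drifts safely away from it. A rough illustration, again with made-up numbers rather than anything from Toyota:

```python
import math

# A barrier sits 50 m dead ahead of the car's CURRENT heading, but the
# car is turning steadily through the curve. Numbers are illustrative.

speed_mps = 25.0               # about 55 mph
barrier_ahead_m = 50.0         # measured along the current heading
turn_rate = math.radians(5.0)  # 5 degrees/second of heading change

# Straight-line logic predicts an impact and sounds the alarm.
ttc_s = barrier_ahead_m / speed_mps
print(f"straight-line TTC: {ttc_s:.1f} s -> alarm")

# Arc geometry: at constant speed and turn rate the car traces a circle
# of radius R = v / omega, so by the predicted "impact" time it has
# drifted sideways, away from the barrier.
R = speed_mps / turn_rate
theta = turn_rate * ttc_s                  # heading change after ttc_s
sideways_m = R * (1.0 - math.cos(theta))   # lateral drift off old heading
print(f"after {ttc_s:.1f} s the car is {sideways_m:.1f} m off the old "
      "heading -> it clears the barrier; the alarm was false")
```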

The next example is more bothersome and more important. I just got a notice from my home and car insurance company saying they were dropping the insurance on a home we co-own with one of our sons. His name is on the policy; he pays for that part of the bill; and he lives in the house. The stated reason was that we, the owners, were not living in the house. What happened, as you may have guessed, is that the AI system the insurance company uses checks for inconsistencies in policies, such as homeowners’ policies on homes in which the owners don’t live. That task is far too time-consuming for employees to carry out, so an AI system is a great thing to have. But when the AI system is also the decision-maker, notice-letter writer, and mailer, there are huge problems. A human is more likely to catch such seeming discrepancies. The appropriate way to use AI is to have it send alerts to the humans who can check out and verify the situation. As it stands, AI decisions can drive customers away: I came within a few seconds of cancelling all of our policies with this company and going to another. Pissed-off customers are not good for business, and such situations would be easy enough to avoid if the corporate higher-ups cared enough.
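To make the “alert, don’t decide” point concrete, here is a small sketch of how such a checker could hand its findings to a human instead of acting on them. The policy fields, the mismatch rule, and the review queue are all hypothetical; I obviously have no idea what my insurer’s system actually looks like.

```python
# "Alert a human, don't act": flagged policies go to a review queue.
# All names and fields here are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class Policy:
    policy_id: str
    policy_type: str            # e.g. "homeowners"
    owner_addresses: list[str]  # where the listed owners live
    insured_address: str        # the property being insured

@dataclass
class ReviewQueue:
    items: list[tuple[str, str]] = field(default_factory=list)

    def flag(self, policy: Policy, reason: str) -> None:
        # The system's authority ends here: it records a concern for a
        # human agent to verify. It never cancels a policy or mails a letter.
        self.items.append((policy.policy_id, reason))

def screen(policy: Policy, queue: ReviewQueue) -> None:
    """Flag homeowners' policies where no listed owner lives at the insured address."""
    if (policy.policy_type == "homeowners"
            and policy.insured_address not in policy.owner_addresses):
        queue.flag(policy, "no listed owner lives at the insured address")

queue = ReviewQueue()
screen(Policy("H-1234", "homeowners",
              owner_addresses=["12 Elm St"],   # the parents' address
              insured_address="98 Oak Ave"),   # the co-owning son lives here
       queue)
print(queue.items)  # a human now checks before anything gets cancelled
```

A human reading that queue would notice immediately that a co-owner is on the policy and lives in the house, and no cancellation letter would ever go out.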

Such situations get even more critical and more dangerous when we have AI driving cars, making medical decisions, and operating in other life-on-the-line contexts. AI can help go through huge amounts of data and information and alert people to specific areas of concern. But it should be up to the people to make sense of that information and to formulate plans of action, such as a treatment plan or a surgery.

The other pet-peeve area of concern I have is having AI write. Writing with pen and paper, and writing on a computer, which lags a bit behind handwriting in this respect, is a powerful means for deeper and more complex learning. As far back as I can remember, I hated studying for tests; afterward, I never felt like I had learned very much. But when I had to write a paper, I felt like I had really learned a lot. That is still true today. I love to read and to write, and they usually go together. But just having a computer write a paper for you is a complete waste and a huge act of aggression toward oneself. The same holds true for scientists and academics of all sorts who use AI to write that paper that needs to get published. It’s just another sham and a terrible disservice to oneself and to one’s readers.

“Knowledge and Thought Have Parted Company”

“If it should turn out to be true that knowledge… and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

— Hannah Arendt (1958). The Human Condition (p. 3)

Knowledge and thought are parting company due to the politics that has perverted our educational system under the guise of “raising standards” and “teacher accountability.”