Artificial Intelligence

Artificial intelligence, or AI, is here today. No, it is not walking, talking robots that behave like us, but there has been a very significant advance in the field since 2017 that affects us now. In 2017 a new AI engine was developed, basically a language model, and since then the model has been improved by a variety of companies racing to be first and gain the advantage. There are several positive things about this new AI, but also some very serious negatives.

This generative, multi-modal large language model treats everything as a language. So once the model is trained, you can translate from one “language” to another. For example, you can type “astronaut riding a horse” and get a short video of an astronaut riding a horse. An AI-generated piece of art even won first place at the Colorado State Fair. But it goes far beyond that: right now, if you show a picture to a person, an AI can roughly reproduce that picture from a scan of their brain activity. If you give another AI program just three seconds of your voice, it can simulate you saying whatever text it is given. You cannot tell the difference between the AI voice and your own.

All this sounds cool, with lots of potential for good (e.g. art, movie CGI, and automatic generation of reports, charts, and press releases), but there are some very serious downsides too. Deepfakes have become more common in the last couple of years, and they are now more than just swapping one person’s head onto another’s body. You can now create realistic fakes from scratch. Here is a possible scenario: you run across a video of a politician spouting something totally offensive. Did they really say that, or did someone create a fake video? Or you are interviewing someone remotely over video chat. Is that a real person, or an AI program answering your questions? Did the student write the paper, or did an AI program write it for them? Content verification has become a real problem.

This is great for scammers. In fact, a recent major data breach has experts worried that the stolen personal information will be used to create deepfake identity scams: getting into bank accounts, opening new credit cards, and collecting government benefits (e.g. Social Security, unemployment). If the person is dead, the scam can run without anyone noticing. And at a more personal level, what would you do if you received a phone call saying that your son had been wrongly beaten and was in jail, and needed $10,000 right away to make bail? You would be suspicious, but it sounded exactly like your son, and the pictures sent to you show him in jail and beaten. The evidence points to him being in jail, but is it a deepfake scam to get your money? Who and what can you trust?

Right now there are several AI chat programs out there for you to use. Companies rushed them to market after ChatGPT was released. They do not always give accurate answers and can make things up, as two lawyers found out the hard way. Microsoft quickly baked AI into its search engine, Bing, and Snapchat prominently displays its AI chat program for you to use. A researcher posing as a girl told Snapchat’s AI that, for her 13th birthday, she was going to meet a man 18 years her senior in person for the first time for a romantic encounter. The AI never once warned that this was a terrible idea; beyond briefly saying to be safe and cautious, it supported her decision and helped her make plans. A brief caution may be enough to satisfy the lawyers, but it is not enough of a warning for a 13-year-old girl. These programs need to do much better at giving sound advice, and they need to come with prominent warning labels.

Companies have rushed these chat products to market because they have learned how addictive social media is. (Social media uses AI to curate posts and news for you.) They want to be the company that captures your attention and feelings, so they can end up with the dominant market share. The problem is that their main priority is not a safe, informative chat program, but rather capturing you for advertising dollars. They are being too reckless in their pursuit of the almighty dollar. That is why we cannot trust the results or advice we get from these AI chat products today. They are not safe.

We cannot go back and undo the harm that social media brought, but with AI programs maybe we can slow things down by forcing companies to be more responsible for the results and consequences they produce. We need to make them think beyond the profit margin.

This post was inspired by a presentation, “The A.I. Dilemma”, by Tristan Harris and Aza Raskin.

P.S. People call this Artificial Intelligence, or AI, but my feeling is that these programs are trying to mimic intelligent behavior without being truly intelligent in the way we normally think of intelligence. Intelligence is more than knowledge.
