It’s time we discussed AI


Editor’s note — This is the second part of a two-part column.

Let’s continue our foray into LLMs (Large Language Models). But before we do, I should mention that the AI landscape is changing rapidly. Pretty much every week there’s an announcement from one of the major players about changes or new facilities available in their LLM. For instance, last week Google announced that Bard (their conversational LLM that we discussed last time) can now generate images (see https://go.ttot.link/BardImages).

You’ll also see references to computer chips. Sorting through all that data quickly enough to give you an answer in a few seconds takes a lot of computing power, and not the traditional kind. What’s often needed is the type of computing done by graphics processors, and the largest graphics processor maker is Nvidia (pronounced “en-vidia”), which is why you’ll often see it mentioned in articles about LLMs. Any generally available LLM will use many of those processors, from hundreds to thousands of them. Yes, it takes a lot of electricity to run them, and it costs a lot of money to buy and power them. The major players are looking to create their own chips to cut costs and to reduce their dependence on products made by another company.

So, how do you get the most out of your chat with an AI? The question or task you give to an AI is called a “prompt” and, if you go looking, you’ll find many resources that offer help in crafting a prompt to get the best results. This is often called “prompt engineering,” and it really is quite helpful because the way you chat with an AI is not the same way you chat with a person. I have limited space, so I won’t go into it here, but two good resources to learn from are https://go.ttot.link/Prompts-1 and https://go.ttot.link/Prompts-2.

Until now I’ve concentrated on LLMs that generate text but, as the Bard announcement I cited at the beginning of today’s column indicates, they can also generate images. Some popular image generation sites are DALL-E 2 from OpenAI (https://go.ttot.link/DALL-E-2), Midjourney (https://go.ttot.link/MidJourney), and NightCafe (https://go.ttot.link/NightCafe). All require you to create an account, and all will give you free credits to generate some images. They also provide some assistance in crafting a prompt to generate the best possible image.

In fact, LLMs can generate many sorts of things, from pictures to videos, from music to spoken words, from text to full-blown PowerPoint presentations. You can even use them to generate “deep fake” pictures, voices and videos. They’re called “deep fakes” because it is very difficult to tell whether they are the real thing or not. Deep fakes can be fun, but they can also be very misleading, and if a deep fake involves a news story or a candidate for political office it can be very damaging. The TV show “60 Minutes” recently dedicated a segment to deep fakes (https://go.ttot.link/DeepFakes). Deep fakes, along with other nefarious uses of LLMs, are causing lawmakers and ethicists to discuss how to deal with them. It’s still early days, though, and no real solution has presented itself. In the past I’ve cautioned us all to be skeptical of anything we see or hear that doesn’t sound right or conflicts with what we know to be true. That’s even more important now. How do we deal with it? Check with other, reliable sources. If you see a post on a social media site that is contrary to what you know, or seems a bit outrageous, and is said to come from a specific news site, go to that site and check for that story.

For all that, though, LLMs can help us in many ways in our daily lives and are likely already doing just that, maybe without our knowledge. Heck, Walmart/Sam’s Club is using LLMs to help keep the right products in stock.

That’s all for this week’s column. I hope this helps you understand Large Language Models (LLMs, or generative AI) and gives you some ideas about how to use them in your everyday life. Don’t hesitate to write to me if you have questions!

As always, my intent with these columns is to spark your curiosity, give you enough information to get started, and arm you with the necessary keywords (or buzzwords) so you’ll understand the basics and are equipped to search for more detailed information.

Please feel free to email me with questions, comments, suggestions, requests for future columns, to sign up for my newsletter, or whatever at [email protected] or just drop me a quick note and say HI!

You’ve got choices as to how you read my columns. You can read all my columns and sign up for my newsletter to have them delivered to your email when I publish them at https://go.ttot.link/TFTNT-Newsletter. You can read the original columns in the Hillsboro Times Gazette at https://go.ttot.link/TGColumns+Links or https://go.ttot.link/TGC+L. That page contains a link to all of my newspaper columns along with live, clickable links for each site referenced in the column. It should be updated shortly after this column appears in the online version of the newspaper.

Tony Sumrall, a Hillsboro native whose parents ran the former Highland Lanes bowling alley, is a maker with both leadership and technical skills. He’s been in the computing arena since his graduation from Miami University with a bachelor’s degree in systems analysis, working for and with companies ranging in size from five to hundreds of thousands of employees. He holds five patents and lives and thrives in Silicon Valley which feeds his love for all things tech.
