" "

Top 5 This Week

" "

Related Posts

How to Stay Human in an Artificial World


By Veronica Mackey

Artificial Intelligence (AI) is a powerful tool that can reduce your workload, put money in your pocket, inspire you, and make you believe anything is possible. It can also cause you to make a foolish decision if you can’t tell the difference between the real and the fake. Here is some food for thought on navigating AI in real life.

Telling the Real from the Fake

Here’s how you can spot the tell-tale signs of AI-generated content:

  1. Too perfect to be true
  2. Slang and idioms are “off”
  3. Content lacks a human touch
  4. Surface-level ideas
  5. Misinformation

Too Perfect

A dead giveaway is written content that feels too perfect. The spelling and punctuation may be flawless, but it doesn’t read the way real people talk.

Slang and Idioms 

Aside from the obvious robotic voices that we hear on Google Maps and other technologies, AI lacks the human experience of knowing slang and idioms unique to certain regional and cultural groups.  

Lacks Human Touch

AI knows the words, but lacks the history and emotion behind them. Human beings write from the heart and express themselves in a variety of ways that are often unpredictable, which is why auto-text can really mess us up when we try to tell our unique stories.

Surface Level

AI shies away from controversy because it does not know how to respond appropriately. It can generate good ideas, but not groundbreaking ones, at least for now.

Misinformation

This is probably the worst quality of all because it has the potential to do the most damage. Case in point: a recent YouTube video about Malia Obama being stopped and racially profiled by police. It was presented in narrative form rather than as a news item, and recorded as two separate videos with male and female voices. The alleged story could only be found on YouTube, which made it impossible to verify.

Generally speaking, all AI-generated content should be reviewed, edited and enhanced by human creators. Case in point: the embarrassing viral post of Donald Trump praying inside a church (first red flag!) with one too many fingers. If the images you see are way over the top, or if the people seem too flawless and attractive to be real, they probably aren’t.

Fake News

During Trump’s first presidency, “fake news” became a term to describe mainstream media, and basically any media outlet that did not report what Trump wanted to hear. When CNN refuted his claim that his inauguration crowd was larger than President Obama’s, showing photos of both events side by side, Trump accused the news channel of trickery. Ironically, Vice President Kamala Harris was frequently accused of using AI to make crowds appear larger at her rallies.

Now, AI has become a way to discredit people, often with little or no proof of wrongdoing. As Ellen Judson, Senior Investigator at Global Witness, writes, “Don’t like the claims someone is making?  It’s far easier to claim that the evidence they have is AI-generated than to actually have to create evidence yourself.”

Whether you’re watching a YouTube video of an AI-generated Rihanna singing a gospel song, listening to a voice-enhanced Denzel Washington giving relationship advice, or admiring the breathtaking beauty of birds (with jeweled beaks, no less!) on Facebook, AI is, for the most part, benign entertainment.

But what happens when the technology is used for something more sinister?  

Protecting the Public

According to an article on Bloomberglaw.com (Oct. 10, 2024) titled “AI Needs Regulatory Guardrails in the US to Ensure Safe Use” by Laura Kempe, “There is no federal law addressing AI safety issues. President Joe Biden’s executive order on AI safety serves as a policy directive, guiding federal agencies to develop AI safety standards and conduct testing. However, the order isn’t a law and doesn’t have direct legal binding force.”

California is among a handful of states that have enacted AI regulations, but they don’t go far enough to cover long-term protections, “especially now that California Governor Gavin Newsom has vetoed California’s AI safety bill,” Kempe wrote.

Kempe suggests regulatory agencies could start by:

  1. Setting federal minimum standards that align with global AI regulations, making compliance easier for companies.
  2. Designing and implementing emergency shutdown procedures for handling critical AI malfunctions.
  3. Documenting internal processes in a transparent and explainable manner to prepare for potential third-party audits.

The potential for AI to either enhance our lives or pose significant risks depends on how it is regulated. 
