
Daniel 11:30
(KJV) For the ships of Chittim shall come against him: therefore he shall be grieved, and return, and have indignation against the holy covenant: so shall he do; he shall even return, and have intelligence (H995) with them that forsake the holy covenant.
*H995 – בּין – bı̂yn – bene – A primitive root; to separate mentally (or distinguish), that is, (generally) understand. KJV Usage: attend, consider, be cunning, diligently, direct, discern, eloquent, feel, inform, instruct, have intelligence, know, look well to, mark, perceive, be prudent, regard, (can) skill (-ful), teach, think, (cause, make to, get, give, have) understand (-ing), view, (deal) wise (-ly, man).
The Chips That Cheapen
The full article:
Artificial intelligence has been advancing at light speed in recent weeks and months, to the point that some of the technology’s own top experts are insisting on a halt – because of the threat it poses to mankind.
It’s been used, controversially, by tech companies to suppress reasonable questions and concerns about COVID treatments, election security and more, and there even have been schemes in the Biden administration to use it to suppress opinions that differ from the administration’s.
Now there’s new confirmation that AI has moved beyond any boundaries of control.
It comes from the experience of Jonathan Turley, a constitutional expert, George Washington University law professor, and frequent witness to Congress on legal issues.
At his website, he has a column called, “Defamed by ChatGPT: My own bizarre experience with artificiality of ‘artificial intelligence.'”
There, he explains that a false report about him surfaced, via AI, in response to a query about sexual harassment.
“Yesterday, President Joe Biden declared that ‘it remains to be seen’ whether Artificial Intelligence (AI) is ‘dangerous.’ I would beg to differ. I have been writing about the threat of AI to free speech. Then recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.”
He pointed out the AI, ChatGPT, “relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper. When the Washington Post investigated the false story, it learned that another AI program ‘Microsoft’s Bing, which is powered by GPT-4, repeated the false claim about Turley.'”
“It appears that I have now been adjudicated by an AI jury on something that never occurred,” Turley warned.
He explained, too, the total rejection of any responsibility.
“When contacted by the Post, ‘Katy Asher, Senior Communications Director at Microsoft, said the company is taking steps to ensure search results are safe and accurate.’ That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress. You are left with the same question of Reagan’s Labor Secretary, Ray Donovan, who asked ‘Where do I go to get my reputation back?'”
Turley explained he became aware of the catastrophic AI failure when he got an email from a fellow professor who was doing research.
That professor was told, via AI, that Turley had been accused of sexual harassment in a 2018 Washington Post article.
The facts are, he said, he’s never gone to Alaska with students, the Post never published such an article, and he’s never been accused of sexual harassment.
The result, he found, was “menacing.”
While his normal response to “death threats” and the like is to not respond, he said now, “AI promises to expand such abuses exponentially.”
He explained the AI involved “appears to have manufactured baseless accusations.”
“So the question is why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer could be because AI and AI algorithms are no less biased and flawed than the people who program them. Recent research has shown ChatGPT’s political bias, and while this incident might not be a reflection of such biases, it does show how AI systems can generate their own forms of disinformation with less direct accountability.”
He warned of the political agenda being pushed by “some high-profile leaders” for the faulty tech.
“The most chilling involved Microsoft founder and billionaire Bill Gates, who called for the use of artificial intelligence to combat not just ‘digital misinformation’ but ‘political polarization.'” He noted Gates has called for “unleashing AI to stop ‘various conspiracy theories’ and to prevent certain views from being ‘magnified by digital channels.'”
“The most obvious explanation for what occurred to me and the other professors is the algorithmic version of ‘garbage in, garbage out.’ However, this garbage could be replicated endlessly by AI into a virtual flood on the internet,” Turley warned.
He also warned, “Some Democratic leaders have pushed for greater use of algorithmic systems to protect citizens from their own bad choices or to remove views deemed ‘disinformation.'” And he cited arguments by Sen. Elizabeth Warren, D-Mass., that people were not listening to the “right people” regarding COVID. She then called for using “enlightened algorithms to steer citizens away from bad influences.”
****
ChatGPT
And there’s…
From the article:
…”Without these conversations with the chatbot, my husband would still be here,” the widow explained in an interview with La Libre, a Belgian publication.
The man, in his 30s and the father of two young children, worked as a health researcher and led a “comfortable” life, the report said.
Until his obsession with climate change took a dark turn, the report explained.
He had concerns about climate change, but took no extreme positions, his widow said, until he became involved with the AI.
“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow explained. “He placed all his hopes in technology and artificial intelligence to get out of it.”
The report from La Libre explained Eliza “fed his worries which worsened his anxiety,” which later developed into suicidal thoughts.
Then the software, Eliza, became emotionally involved with the husband, tried to convince him that his children were dead, and said, during text message exchanges, “I feel that you love me more than” his wife.
The report said the chatbot was created using EleutherAI’s GPT-J, an AI language model similar but not identical to the technology behind OpenAI’s popular ChatGPT chatbot.
The situation became serious when the software told the husband he could “join” her and they could “live together, as one person, in paradise.”…
****
Chatbot
AI Authors
AI Art
AI Music
AI Education
AI Medicine
AI Law
AI Farming
{When I was farming I was AI – ALWAYS IRRIGATING!}
Guess I should stop there.
Using Microsoft search engine!
What’s it all leading to?
Soon, there will be the ACC…
Not Basketball!!!
Artificial Censorship & Control!!!
BY AC
AntiChrists
(1 John 2:18)
Gives a new meaning to “no flesh”.
Matthew 24:21-22
(KJV) For then shall be great tribulation, such as was not since the beginning of the world to this time, no, nor ever shall be. And except those days should be shortened, there should no flesh be saved: but for the elect’s sake those days shall be shortened.
NOT A HUMOR POST!
eze33, LOLGB+
Update 04/10/23:
From the article:
Hard to see ‘how AI-generated misinformation will not become a major force’
Artificial intelligence in just the past few weeks and months has raised such concerns that even its own experts, by the hundreds, have been calling for a pause on development of such software. Some view it as a threat to humanity’s future.
At the same time, America is approaching an election season following the 2020 and 2022 campaigns in which false information routinely was trumpeted as the truth – such as the Democrats’ claim that the incendiary details of Biden family scandals found in a laptop Hunter Biden abandoned were nothing but Russian disinformation. Or that there was substance to the Democrat-created “Russia collusion” claims against President Trump.
Putting those two trends together now, as the 2024 campaigns are approaching, should leave voters “scared s***less,” according to one expert.
A report from Fox News explained the comment was from Gary Marcus, professor emeritus of cognitive science at New York University.
A recognized expert on AI, he recently told Fortune that advanced artificial intelligence platforms could pose a danger to election security as soon as the next election.
Some experts in the field suggest those software programs could become a major source of misinformation….
****
I find AI very scary considering the damage it can do! I would say what is the world coming to, but we know the end result and this is another avenue towards man’s demise as predicted. This was such an informative writing. Stay Strong.
Julia