07/07/2023 / By Belle Carter
Over the past few years, the likelihood of encountering misleading, politically driven reports on major news portals has become very high. One has to be highly analytical to decipher whether the information is REAL or just government or “woke” mob propaganda.
Now, the mainstream media and the world’s largest technology companies are about to embark on something huge. According to the Financial Times (FT), renowned publishers, including News Corp, Axel Springer, the New York Times and the Guardian, are negotiating with tech companies to strike landmark deals over the use of news content – which most likely is FAKE, or at least biased – to train their artificial intelligence (AI) chatbots.
FT reported that those involved in the initial discussions said that the deals could involve media organizations being paid a subscription-style fee for their content in order to develop the technology underpinning chatbots such as OpenAI’s ChatGPT and Google’s Bard. “The talks come as media groups express concern over the threat to the industry posed by the rise of AI, as well as fears over the use of their content by OpenAI and Google without deals in place. Some companies such as Stability AI and OpenAI are facing legal action from artists, photo agencies and coders, who allege contractual and copyright infringement,” FT further said.
Critics also worry because fake or biased information about elections, climate change, financial stability and just about everything else is already widespread. Once AI systems ingest this information, it will be passed on as fact to God knows how wide an audience.
Additionally, News Corp CEO Robert Thomson said AI was “designed so the reader will never visit a journalism website, thus fatally undermining that journalism.” He added that news outlets would argue vociferously for compensation. In fact, one industry executive said the current discussions have revolved around a pricing model in the range of $5 million to $20 million per year.
“In short, use their content to train your AI without paying, [then you’ll] get sued,” ZeroHedge‘s Tyler Durden wrote in response to this news.
So, if the two sectors come to terms, the deals would establish a blueprint for news organizations dealing with generative AI companies worldwide.
“Copyright is a crucial issue for all publishers,” said FT, which is also in negotiations over the matter. “As a subscription business, we need to protect the value of our journalism and our business model. Engaging in constructive dialogue with the relevant companies, as we are, is the best way to achieve that.”
According to the report, media industry executives want to avoid the pitfalls of the early internet, when they undermined their own business models by giving away so much news for free, while Big Tech companies such as Google and Facebook then accessed that information to grow their multibillion-dollar advertising platforms.
People love how conveniently AI-written information can be accessed, especially those who want instant answers to just about any question under the sun, regardless of the information’s legitimacy.
To that end, Google announced in May that it had placed a generative search function in the most valuable real estate on the internet: its existing search results page, which now returns an AI-written information box above the traditional list of web links. The feature has launched in the U.S. and is gearing up for worldwide release.
Back in May, the tech giant’s VP of Search Liz Reid flipped her laptop open and started typing into the search box during a demonstration session. She entered the query: “Why is sourdough bread still so popular?” Google’s normal search results loaded almost immediately. Above them, a rectangular orange section pulsed and glowed, displaying the phrase “Generative AI is experimental.” A few seconds later, the glow was replaced by an AI-generated summary detailing how good sourdough tastes, the upsides of its prebiotic abilities and so on. To the right were three links to sites with information that Reid said corroborated what was in the summary. This, according to the tech company, is the “AI snapshot,” which was generated by Google’s large language models (LLMs) from sources on the open web.
The search engine company’s exec then moused up to the top right of the box and clicked an icon they call the “bear claw,” which opened a new view. The AI snapshot was split sentence by sentence, with links underneath to the sources of the information for each specific sentence. Reid pointed out again that this is corroboration, and she said it is key to what makes Google’s AI implementation different. “We want [the LLM], when it says something, to tell us as part of its goal: what are some sources to read more about that?”
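To make the idea behind that “bear claw” view concrete, here is a minimal sketch in Python of how a snapshot with per-sentence source links could be represented. Google has not published how the AI snapshot works internally, so every class, field and URL below is a hypothetical assumption used purely for illustration.

from dataclasses import dataclass

# Hypothetical data model; Google has not disclosed the AI snapshot's internals.
@dataclass
class SnapshotSentence:
    text: str           # one sentence of the AI-generated summary
    sources: list[str]  # links offered as corroboration for that sentence

@dataclass
class AISnapshot:
    sentences: list[SnapshotSentence]

    def summary(self) -> str:
        # Collapsed view: the summary shown above the normal web links.
        return " ".join(s.text for s in self.sentences)

    def expanded_view(self) -> str:
        # "Bear claw" view: each sentence followed by its corroborating links.
        lines = []
        for s in self.sentences:
            lines.append(s.text)
            lines.extend("  source: " + url for url in s.sources)
        return "\n".join(lines)

# Example with made-up content and placeholder URLs:
snapshot = AISnapshot(sentences=[
    SnapshotSentence("Sourdough remains popular for its flavor.",
                     ["https://example.com/sourdough-flavor"]),
    SnapshotSentence("Its fermentation may offer prebiotic benefits.",
                     ["https://example.com/sourdough-health"]),
])
print(snapshot.expanded_view())

In Reid’s demo, the collapsed summary is what readers see first; the per-sentence view with its links is only revealed once the bear claw icon is clicked.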
Many searches were already well served by the results; however, some sets of queries still did not seem to work at the time. And then an interesting thing happened: Google’s AI expressed an opinion during the demo.
David Pierce, the Verge‘s editor-at-large, asked Reid to search only the word “Adele.” The AI snapshot almost instantly returned some information about the English singer-songwriter’s past, her accolades as a singer and a note about her recent weight loss. Then it said: “Her live performances are even better than her recorded albums.” Reid quickly clicked the bear claw and sourced that sentence to a music blog, but she also acknowledged that this was something of a system failure.
It sounded like a really biased opinion. (Related: AI chatbots can be programmed to influence extremists into launching terror attacks.)
FutureTech.news has more stories on how tech firms urge people to be dependent on AI-based information.