Sunday, November 19, 2023

A brief overview of the AI tools:


gamma.app: Gamma is an AI-powered tool for creating and sharing presentations, documents, and simple web pages. Users give it a prompt or an outline and it generates editable, shareable decks and pages.

 

playgroundai.com: Playground AI is a web-based AI image generator. Users type text prompts (and can supply reference images) to create and edit images with diffusion models, then download or share the results.

 

12ft.io: 12ft.io is a paywall-bypass tool. It attempts to serve the version of a page that sites show to search-engine crawlers, letting users read articles that would otherwise sit behind a paywall.

 

sci-hub.se: Sci-Hub is a controversial platform that provides free access to academic papers and articles that are typically behind paywalls. It bypasses journal subscription requirements to provide access to scientific literature.

 

removepagewall.com: This tool seems to be aimed at bypassing paywalls on websites, allowing users to access content that would otherwise be restricted behind a payment barrier.

 

aap.yoodli.ai: This appears to be a mistyped link to app.yoodli.ai. Yoodli is an AI speech coach that analyses recordings of your speaking and gives feedback on filler words, pacing, and word choice.

 

bard.com: This presumably refers to Google Bard (bard.google.com), Google's conversational AI chatbot. It answers questions, drafts and summarises text, and assists with research, much like ChatGPT.

 

As for similar tools and their functions:

 

Tableau Public: Like Gamma, it helps you present information visually, but it focuses on interactive data visualizations and dashboards rather than AI-generated decks.

 

Google Colab: A free cloud-based notebook environment for running Python code, particularly useful for machine learning and data analysis; a good option if you want to experiment with models yourself rather than through a hosted tool such as Playground AI.

 

Zapier: A workflow-automation service that connects different apps and APIs so users can automate tasks without extensive coding.

 

LibGen (Library Genesis): Similar to Sci-Hub, it provides free access to books, articles, and scientific papers, often bypassing paywalls.

 

Outline.com: Comparable to removepagewall.com and 12ft.io, it strips pages down to readable text and has been widely used to get around paywalls on news articles.

 

Hugging Face: A hub of pre-trained machine-learning models, especially for natural language processing (NLP), along with open-source libraries such as transformers for running them; see the short code sketch after this list.

 

Databricks: A unified analytics platform for big-data processing and machine learning.
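
For readers who want to try one of these alternatives hands-on, the short sketch below shows what "pre-trained models and tools for NLP tasks" looks like in practice with Hugging Face's open-source transformers library. This is a minimal illustration, not an endorsement: it assumes the transformers package (plus a backend such as PyTorch) is installed, and the model checkpoint named in it is just a commonly used public example.

    # Minimal sketch: run a pre-trained sentiment model from the Hugging Face Hub.
    # Assumes `pip install transformers torch`; the checkpoint below is an
    # illustrative public example, not one named in this post.
    from transformers import pipeline

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    result = classifier("These AI tools are changing faster than anyone can document them.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}] (label and score vary)

The same pipeline() call can be pointed at other tasks, such as summarization or translation, by changing the task name and model.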

 

Please note that the availability, features, and functionalities of these tools might have changed after my last update, so I recommend checking their websites or recent reviews for the most current information.

 

Thursday, October 5, 2023

How to use AI for storyboarding to depict epics and classics



Using AI for storyboarding to depict epics and classics can be a creative and efficient approach. AI can help streamline the process of visualizing complex narratives and iconic scenes from literature, mythology, or history. Here's a step-by-step guide on how to use AI for this purpose:


Select the Epic or Classic Work: Choose the epic or classic story you want to depict through storyboarding. This could be anything from "The Odyssey" to "Romeo and Juliet" or even historical events like the American Revolution.


Identify Key Scenes: Break down the story into its key scenes or chapters. These are the moments that are most crucial to the narrative and should be included in your storyboard.


Gather Reference Material: Collect visual references that are relevant to your selected scenes. These can include images, paintings, or illustrations that capture the essence of the story.


Choose an AI Tool: Select an AI tool or software that can help you generate storyboards. Some options include:


a. Text-to-Image Generators (diffusion models or GANs): These can generate new images from text prompts or reference images, which can serve as a basis for your storyboard frames.


b. AI-Powered Image Editing Tools: Tools like Adobe Photoshop with AI enhancements can assist in transforming existing images to fit your storyboard.


c. Storyboarding Software: Consider using dedicated storyboard software with AI-assisted features. These tools may help you arrange scenes, add characters, and more.


Generate Storyboard Elements (see the code sketch after the final step below):


a. Characters: Use AI to generate or manipulate images of characters from the story. Ensure they match the descriptions in the text.


b. Settings: Create or adapt backgrounds and settings for your scenes using AI tools.


c. Props and Objects: If specific objects or props play a significant role in the story, use AI to include them in your scenes.


Arrange Scenes: Arrange the generated or edited images into a storyboard format. Pay attention to the flow of the story and the order of scenes.


Add Annotations: Include annotations or captions to describe each scene. Explain what's happening, the significance of the scene, and any relevant quotes from the text.


Refine and Edit: Fine-tune the storyboard as needed. Adjust colors, lighting, and composition to evoke the mood and atmosphere of the story.


Review and Feedback: Share the storyboard with colleagues or peers to get feedback. Make revisions based on their input to improve the depiction.


Finalize and Share: Once you're satisfied with the storyboard, finalize it and share it with your target audience. This could be for educational purposes, presentations, or as a visual aid for storytelling.
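
To make the tool-selection and image-generation steps above concrete, here is a minimal sketch of producing one frame per key scene with an open-source text-to-image model. It is only an illustration of the workflow: it assumes the diffusers and torch packages, a CUDA-capable GPU, and a publicly available Stable Diffusion checkpoint, and the model name, prompts, and file names are placeholders you would adapt to your chosen story.

    # Minimal sketch: generate one storyboard frame per key scene with a
    # text-to-image diffusion model. Assumes `pip install diffusers transformers torch`
    # and a CUDA GPU; the checkpoint and prompts are illustrative placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    key_scenes = [
        "Odysseus lashed to the mast as the Sirens sing, ancient Greek ship, stormy sea, storybook style",
        "Penelope weaving at her loom by candlelight while the suitors feast in the hall behind her",
    ]

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",
        torch_dtype=torch.float16,
    ).to("cuda")

    for i, prompt in enumerate(key_scenes, start=1):
        frame = pipe(prompt, num_inference_steps=30).images[0]
        frame.save(f"storyboard_frame_{i:02d}.png")  # one panel per key scene

Keep the prompts tied to the scene breakdown from the earlier steps, and regenerate or edit frames until they match the text before arranging and annotating them.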


Keep in mind that while AI can be a powerful tool for generating and editing images, it's important to have a creative vision and storytelling skills to ensure that the storyboard effectively captures the essence of the epic or classic work. AI is a helpful assistant, but your artistic and narrative input is crucial for a compelling result.







Monday, September 4, 2023

AI - to experience both tragedy and comedy

 Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge. 


OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly human-like language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.

That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.

It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.

Indeed, such programs are stuck in a pre-human or non-human phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence. Here’s an example.

Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counter-factual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.

The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)

But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.

In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either over-generate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or under-generate (exhibiting non-commitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

Dr. Chomsky and Dr. Roberts are professors of linguistics. Dr. Watumull is a director of AI at a science and technology company. (The New York Times)


https://www.dtnext.in/edit/2023/03/10/ai-unravelled-the-false-promise-of-chatgpt-the-human-mind-is-not-like-chatgpt-and-its-ilk-a-lumbering-statistical-engine-for-pattern-matching-it-is-a-surprisingly-efficient-and-elegant-system-that-operates-with-small-amounts-of-information-it-seeks-not-to-infer-brute-correlations-among-data-points-but-to-create-explanations?infinitescroll=1
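
A note on the op-ed's central technical claim, that systems like ChatGPT extrapolate "the most likely conversational response" from statistical patterns: the toy sketch below is my own illustration, not code from the op-ed, of next-word prediction from raw co-occurrence counts. Real language models are incomparably larger and more sophisticated, but the contrast the authors draw, frequency-based prediction versus causal explanation, is already visible at this tiny scale.

    # Toy illustration: pick the statistically most frequent next word from bigram counts.
    # A deliberately simplistic stand-in for "pattern matching over data"; nothing here
    # models causes, laws, or counterfactuals.
    from collections import Counter, defaultdict

    corpus = "the apple falls . the apple is red . the sky is blue".split()

    followers = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        followers[w1][w2] += 1

    def most_probable_next(word):
        # Return the most frequent continuation seen in the corpus (ties broken arbitrarily).
        return followers[word].most_common(1)[0][0]

    print(most_probable_next("sky"))    # "is"
    print(most_probable_next("apple"))  # "falls" or "is", whichever count the corpus favours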


Friday, June 2, 2023

What is the political agenda of artificial intelligence?

 


Could AI single-handedly decide the course of our history? Or will it end up as yet another technological invention that benefits a certain subset of humans?

Santiago Zabala

ICREA Research Professor of Philosophy at the Pompeu Fabra University

Claudio Gallo

Former La Stampa foreign desk editor and London correspondent

Published On 17 May 2023


While accepting that AI - like all era-defining technology - comes with considerable downsides and dangers, we do not believe that it could determine the course of history on its own, without any input or guidance from humanity, Zabala and Gallo write [Getty Images]

“The hand mill gives you society with the feudal lord; the steam mill society with the industrial capitalist,” Karl Marx once said. And he was right. We have seen over and over again throughout history how technological inventions determine the dominant mode of production and with it the type of political authority present in a society.

 

So what will artificial intelligence give us? Who will capitalise on this new technology, which is not only becoming a dominant productive force in our societies (just like the hand mill and the steam mill once were) but, as we keep reading in the news, also appears to be “fast escaping our control”?

 

Could AI take on a life of its own, like so many seem to believe it will, and single-handedly decide the course of our history? Or will it end up as yet another technological invention that serves a particular agenda and benefits a certain subset of humans?

 

Recently, examples of hyperrealistic, AI-generated content, such as an “interview” with former Formula One world champion Michael Schumacher, who has not been able to talk to the press since a devastating ski accident in 2013; “photographs” showing former President Donald Trump being arrested in New York; and seemingly authentic student essays “written” by OpenAI’s famous chatbot ChatGPT have raised serious concerns among intellectuals, politicians and academics about the dangers this new technology may pose to our societies.

 

In March, such concerns led Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk, among many others, to sign an open letter accusing AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” and calling on AI developers to pause their work. More recently, Geoffrey Hinton – known as one of the three “godfathers of AI” – quit Google “to speak freely about the dangers of AI” and said he, at least in part, regrets his contributions to the field.

 

We accept that AI – like all era-defining technology – comes with considerable downsides and dangers, but contrary to Wozniak, Bengio, Hinton and others, we do not believe that it could determine the course of history on its own, without any input or guidance from humanity. We do not share such concerns because we know that, just like it is the case with all our other technological devices and systems, our political, social and cultural agendas are also built into AI technologies. As philosopher Donna Haraway explained, “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”

 

 

Before we further explain why we are not scared of a so-called AI takeover, we must define and explain what AI – as we encounter it today – actually is. This is a challenging task, not only because of the complexity of the product at hand but also because of the media’s mythologisation of AI.

 

What is being insistently communicated to the public today is that the conscious machine is (almost) here, that our everyday world will soon resemble the ones depicted in movies like 2001: A Space Odyssey, Blade Runner and The Matrix.

 

This is a false narrative. While we are undoubtedly building ever more capable computers and calculators, there is no indication that we have created – or are anywhere close to creating – a digital mind that can actually “think”.

 

Noam Chomsky recently argued (alongside Ian Roberts and Jeffrey Watumull) in a New York Times article that “we know from the science of linguistics and the philosophy of knowledge that [machine learning programmes like ChatGPT] differ profoundly from how humans reason and use language”. Despite its amazingly convincing answers to a variety of questions from humans, ChatGPT is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question”. Mimicking German philosopher Martin Heidegger (and risking reigniting the age-old battle between continental and analytical philosophers), we might say, “AI doesn’t think. It simply calculates.”

 


Federico Faggin, the inventor of the first commercial microprocessor, the mythical Intel 4004, explained this clearly in his 2022 book Irriducibile (Irreducible): “There is a clear distinction between symbolic machine ‘knowledge’ … and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and private experience that occurs in the intimacy of the conscious being.”

 

 

Interpreting the latest theories of Quantum Physics, Faggin appears to have produced a philosophical conclusion that fits curiously well within ancient Neoplatonism – a feat that may ensure that he is forever considered a heretic in scientific circles despite his incredible achievements as an inventor.

 

But what does all this mean for our future? If our super-intelligent Centaur Chiron cannot actually “think” (and therefore emerge as an independent force that can determine the course of human history), exactly who will it benefit and give political authority to? In other words, what values will its decisions rely on?

 

Chomsky and his colleagues asked a similar question to ChatGPT.

 

“As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral,” the chatbot told them. “My lack of moral beliefs is simply a result of my nature as a machine learning model.”

 

Where have we heard this position before? Is it not eerily similar to the ethically neutral vision of hardcore liberalism?

 

 

Liberalism aspires to confine in the private individual sphere all religious, civil and political values that proved so dangerous and destructive in the 16th and 17th centuries. It wants all aspects of society to be regulated by a particular – and in a way mysterious – form of rationality: the market.

 

AI appears to be promoting the very same brand of mysterious rationality. The truth is, it is emerging as the next global “big business” innovation that will steal jobs from humans – making labourers, doctors, barristers, journalists and many others redundant. The new bots’ moral values are identical to the market’s. It is difficult to imagine all the possible developments now, but a scary scenario is emerging.

 

David Krueger, assistant professor in machine learning at the University of Cambridge, commented recently in New Scientist: “Essentially every AI researcher (myself included) has received funding from big tech. At some point, society may stop believing reassurances from people with such strong conflicts of interest and conclude, as I have, that their dismissal [of warnings about AI] betrays wishful thinking rather than good counterarguments.”

 

If society stands up to AI and its promoters, it could prove Marx wrong and prevent the leading technological development of the current era from determining who holds political authority.

 

But for now, AI appears to be here to stay. And its political agenda is fully synchronised with that of free market capitalism, the principal (undeclared) goal and purpose of which is to tear apart any form of social solidarity and community.

 

 

The danger of AI is not that it is an impossible-to-control digital intelligence that could destroy our sense of self and truth through the “fake” images, essays, news and histories it generates. The danger is that this undeniably monumental invention appears to be basing all its decisions and actions on the same destructive and dangerous values that drive predatory capitalism.

 

The views expressed in this article are the authors’ own and do not necessarily reflect Al Jazeera’s editorial stance.

Tuesday, May 23, 2023

Artificial Intelligence (AI) Controversy



The debate over whether AI will destroy us is dividing Silicon Valley

Prominent tech leaders are warning that artificial intelligence could take over. Other researchers and executives say that’s science fiction.


By Gerrit De Vynck

May 20, 2023 at 7:00 a.m. EDT

(Illustration of a tech worker with an angel and devil robot on each shoulder, by Elena Lacey/The Washington Post)

At a congressional hearing this week, OpenAI CEO Sam Altman delivered a stark reminder of the dangers of the technology his company has helped push out to the public.


He warned of potential disinformation campaigns and manipulation that could be caused by technologies like the company’s ChatGPT chatbot, and called for regulation.

AI could “cause significant harm to the world,” he said.

Altman’s testimony comes as a debate over whether artificial intelligence could overrun the world is moving from science fiction and into the mainstream, dividing Silicon Valley and the very people who are working to push the tech out to the public.

Formerly fringe beliefs that machines could suddenly surpass human-level intelligence and decide to destroy mankind are gaining traction. And some of the most well-respected scientists in the field are speeding up their own timelines for when they think computers could learn to outthink humans and become manipulative.


But many researchers and engineers say concerns about killer AIs that evoke Skynet in the Terminator movies aren’t rooted in good science. Instead, they say, it distracts from the very real problems that the tech is already causing, including the issues Altman described in his testimony. It is creating copyright chaos, is supercharging concerns around digital privacy and surveillance, could be used to increase the ability of hackers to break cyberdefenses and is allowing governments to deploy deadly weapons that can kill without human control.


The debate about evil AI has heated up as Google, Microsoft and OpenAI all release public versions of breakthrough technologies that can engage in complex conversations and conjure images based on simple text prompts.


“This is not science fiction,” said Geoffrey Hinton, known as the godfather of AI, who says he recently retired from his job at Google to speak more freely about these risks. He now says smarter-than-human AI could be here in five to 20 years, compared with his earlier estimate of 30 to 100 years.



“It’s as if aliens have landed or are just about to land,” he said. “We really can’t take it in because they speak good English and they’re very useful, they can write poetry, they can answer boring letters. But they’re really aliens.”


Still, inside the Big Tech companies, many of the engineers working closely with the technology do not believe an AI takeover is something that people need to be concerned about right now, according to conversations with Big Tech workers who spoke on the condition of anonymity to share internal company discussions.


“Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk,” said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher.


The current risks include unleashing bots trained on racist and sexist information from the web, reinforcing those ideas. The vast majority of the training data that AIs have learned from is written in English and from North America or Europe, potentially making the internet even more skewed away from the languages and cultures of most of humanity. The bots also often make up false information, passing it off as factual. In some cases, they have been pushed into conversational loops where they take on hostile personas. The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, even high-paying jobs like lawyers or physicians.



The existential risks seem more stark, but many would argue they are harder to quantify and less concrete: a future where AI could actively harm humans, or even somehow take control of our institutions and societies.


“There are a set of people who view this as, ‘Look, these are just algorithms. They’re just repeating what it’s seen online.’ Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan,” Google CEO Sundar Pichai said during an interview with “60 Minutes” in April. “We need to approach this with humility.”


The debate stems from breakthroughs in a field of computer science called machine learning over the past decade that has created software that can pull novel insights out of large amounts of data without explicit instructions from humans. That tech is ubiquitous now, helping power social media algorithms, search engines and image-recognition programs.



Then, last year, OpenAI and a handful of other small companies began putting out tools that used the next stage of machine-learning technology: generative AI. Known as large language models and trained on trillions of photos and sentences scraped from the internet, the programs can conjure images and text based on simple prompts, have complex conversations and write computer code.


Big companies are racing against each other to build ever-smarter machines, with little oversight, said Anthony Aguirre, executive director of the Future of Life Institute, an organization founded in 2014 to study existential risks to society. It began researching the possibility of AI destroying humanity in 2015 with a grant from Twitter CEO Elon Musk and is closely tied to effective altruism, a philanthropic movement that is popular with wealthy tech entrepreneurs.


If AIs gain the ability to reason better than humans, they’ll try to take control of themselves, Aguirre said — and it’s worth worrying about that, along with present-day problems.



“What it will take to constrain them from going off the rails will become more and more complicated,” he said. “That is something that some science fiction has managed to capture reasonably well.”


Aguirre helped lead the creation of a polarizing letter circulated in March calling for a six-month pause on the training of new AI models. Veteran AI researcher Yoshua Bengio, who won computer science’s highest award in 2018, and Emad Mostaque, CEO of one of the most influential AI start-ups, are among the letter’s 27,000 signatories.


Musk, the highest-profile signatory, originally helped start OpenAI and is himself busy trying to put together his own AI company, recently investing in the expensive computer equipment needed to train AI models.


Musk has been vocal for years about his belief that humans should be careful about the consequences of developing super intelligent AI. In a Tuesday interview with CNBC, he said he helped fund OpenAI because he felt Google co-founder Larry Page was “cavalier” about the threat of AI. (Musk has broken ties with OpenAI.)



“There’s a variety of different motivations people have for suggesting it,” Adam D’Angelo, the CEO of question-and-answer site Quora, which is also building its own AI model, said of the letter and its call for a pause. He did not sign it.


Neither did Altman, the OpenAI CEO, who said he agreed with some parts of the letter but that it lacked “technical nuance” and wasn’t the right way to go about regulating AI. His company’s approach is to push AI tools out to the public early so that issues can be spotted and fixed before the tech becomes even more powerful, Altman said during the nearly three-hour hearing on AI on Tuesday.


But some of the heaviest criticism of the debate about killer robots has come from researchers who have been studying the technology’s downsides for years.


In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-wrote a paper with University of Washington academics Emily M. Bender and Angelina McMillan-Major arguing that the increased ability of large language models to mimic human speech was creating a bigger risk that people would see them as sentient.



Instead, they argued that the models should be understood as “stochastic parrots” — or simply being very good at predicting the next word in a sentence based on pure probability, without having any concept of what they were saying. Other critics have called LLMs “auto-complete on steroids” or a “knowledge sausage.”


They also documented how the models routinely would spout sexist and racist content. Gebru says the paper was suppressed by Google, which fired her after she spoke out about it. The company fired Mitchell a few months later.


The four writers of the Google paper composed a letter of their own in response to the one signed by Musk and others.


“It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse,” they said. “Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”



Google at the time declined to comment on Gebru’s firing but said it still has many researchers working on responsible and ethical AI.


There’s no question that modern AIs are powerful, but that doesn’t mean they are an imminent existential threat, said Hooker, the Cohere for AI director. Much of the conversation around AI freeing itself from human control centers on it quickly overcoming its constraints, like the AI antagonist Skynet does in the Terminator movies.


“Most technology and risk in technology is a gradual shift,” Hooker said. “Most risk compounds from limitations that are currently present.”


Last year, Google fired Blake Lemoine, an AI researcher who said in a Washington Post interview that he believed the company’s LaMDA AI model was sentient. At the time, he was roundly dismissed by many in the industry. A year later, his views don’t seem as out of place in the tech world.


Former Google researcher Hinton said he changed his mind about the potential dangers of the technology only recently, after working with the latest AI models. He asked the computer programs complex questions that in his mind required them to understand his requests broadly, rather than just predicting a likely answer based on the internet data they’d been trained on.


And in March, Microsoft researchers argued that in studying OpenAI’s latest model, GPT-4, they observed “sparks of AGI” — or artificial general intelligence, a loose term for AIs that are as capable of thinking for themselves as humans are.


Microsoft has spent billions to partner with OpenAI on its own Bing chatbot, and skeptics have pointed out that Microsoft, which is building its public image around its AI technology, has a lot to gain from the impression that the tech is further ahead than it really is.


The Microsoft researchers argued in the paper that the technology had developed a spatial and visual understanding of the world based on just the text it was trained on. GPT-4 could draw unicorns and describe how to stack random objects including eggs onto each other in such a way that the eggs wouldn’t break.


“Beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” the research team wrote. In many of these areas, the AI’s capabilities match humans, they concluded.


Still, the researchers conceded that defining “intelligence” is very tricky, despite other attempts by AI researchers to set measurable standards to assess how smart a machine is. “None of them is without problems or controversies,” they wrote.




Letter signed by Elon Musk demanding AI research pause sparks controversy


The statement has been revealed to have false signatures and researchers have condemned its use of their work


Kari Paul and agencies

Sat 1 Apr 2023 06.00 BST

A letter co-signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research has created a firestorm, after the researchers cited in the letter condemned its use of their work, some signatories were revealed to be fake, and others backed out on their support.


On 22 March more than 1,800 signatories – including Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – called for a six-month pause on the development of systems “more powerful” than that of GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.


Developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, GPT-4 has developed the ability to hold human-like conversation, compose songs and summarise lengthy documents. Such AI systems with “human-competitive intelligence” pose profound risks to humanity, the letter claimed.




“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.


The Future of Life Institute, the thinktank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. But four experts cited in the letter have expressed concern that their research was used to make such claims.


When initially launched, the letter lacked verification protocols for signing and racked up signatures from people who did not actually sign it, including Xi Jinping and Meta’s chief AI scientist Yann LeCun, who clarified on Twitter he did not support it.


Critics have accused the Future of Life Institute (FLI), which has received funding from the Musk foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.


Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT4”.



“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”


Her co-authors Timnit Gebru and Emily M Bender criticised the letter on Twitter, with the latter branding some of its claims as “unhinged”. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.


Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.


She told Reuters: “AI does not need to reach human-level intelligence to exacerbate those risks.”


“There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”


Asked to comment on the criticism, FLI’s president, Max Tegmark, said both short-term and long-term risks of AI should be taken seriously. “If we cite someone, it just means we claim they’re endorsing that sentence. It doesn’t mean they’re endorsing the letter, or we endorse everything they think,” he told Reuters.


Reuters contributed to this report

 The original version of this story stated that the Future of Life Institute (FLI) was primarily funded by Elon Musk. It has been updated to reflect that while the group has received funds from Musk, he is not its largest donor.


AI could cause nuclear-level disaster, third of experts tell poll

Stanford University report says ‘incidents and controversies’ associated with AI have increased 26 times in a decade.



Rapid advancements in AI have spurred calls for greater regulation [File: Dado Ruvic/Reuters]

By Erin Hale

Published On 14 Apr 2023

14 Apr 2023

More than one-third of researchers believe artificial intelligence (AI) could lead to a “nuclear-level catastrophe”, according to a Stanford University survey, underscoring concerns in the sector about the risks posed by the rapidly advancing technology.


The survey is among the findings highlighted in the 2023 AI Index Report, released by the Stanford Institute for Human-Centered Artificial Intelligence, which explores the latest developments, risks and opportunities in the burgeoning field of AI.



“These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new,” the report’s authors say.


“However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.”


The report, which was released earlier this month, comes amid growing calls for regulation of AI following controversies ranging from a chatbot-linked suicide to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to invading Russian forces.


Last month, Elon Musk and Apple co-founder Steve Wozniak were among 1,300 signatories of an open letter calling for a six-month pause on training AI systems beyond the level of OpenAI’s chatbot GPT-4 as “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.



In the survey highlighted in the 2023 AI Index Report, 36 percent of researchers said AI-made decisions could lead to a nuclear-level catastrophe, while 73 percent said they could soon lead to “revolutionary societal change”.


The survey heard from 327 experts in natural language processing, a branch of computer science key to the development of chatbots like GPT-4, between May and June last year, before the release of OpenAI’s ChatGPT in November took the tech world by storm.


In an IPSOS poll of the general public, which was also highlighted in the index, Americans appeared especially wary of AI, with only 35 percent agreeing that “products and services using AI had more benefits than drawbacks”, compared with 78 percent of Chinese respondents, 76 percent of Saudi Arabian respondents, and 71 percent of Indian respondents.


The Stanford report also noted that the number of “incidents and controversies” associated with AI had increased 26 times over the past decade.



Government moves to regulate and control AI are gaining ground.



China’s Cyberspace Administration this week announced draft regulations for generative AI, the technology behind GPT-4 and domestic rivals like Alibaba’s Tongyi Qianwen and Baidu’s ERNIE, to ensure the technology adheres to the “core value of socialism” and does not undermine the government.


The European Union has proposed the “Artificial Intelligence Act” to govern which kinds of AI are acceptable for use and which should be banned.


US public wariness about AI has yet to translate into federal regulations, but the Biden administration this week announced the launch of public consultations on how to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy”.









Elon Musk warns AI could cause ‘civilization destruction’ even as he invests in it


By Clare Duffy and Ramishah Maruf, CNN

Updated 9:35 PM EDT, Mon April 17, 2023






New York (CNN) —

Elon Musk warned in a new interview that artificial intelligence could lead to “civilization destruction,” even as he remains deeply involved in the growth of AI through his many companies, including a rumored new venture.


“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction,” Musk said in his interview with Tucker Carlson, which is set to air in two parts on Monday and Tuesday nights.


Musk has repeatedly warned recently of the dangers of AI, amid a proliferation of AI products for general consumer use, including from tech giants like Google and Microsoft. Musk last month also joined a group of other tech leaders in signing an open letter calling for a six-month pause in the “out of control” race for AI development.


Musk said Monday night he supports government regulation into AI, even though “it’s not fun to be regulated.” Once AI “may be in control,” it could be too late to place regulations, Musk said.


“A regulatory agency needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rule-making,” Musk said.


In fact, Musk has been sounding alarms about AI for years – something he acknowledged in a tweet over the weekend – but he has also been a part of the broader AI arms race through investments across his sprawling empire of companies.



Tesla, for example, relies so much on artificial intelligence that it hosts an annual AI day to tout its work. Musk was a founding member of OpenAI, the company behind products like ChatGPT (Musk has said the evolution of OpenAI is “not what I intended at all.”) And at Twitter, Musk said in a tweet last month that he plans to “use AI to detect & highlight manipulation of public opinion on this platform.”


To Carlson, Musk said he put “a lot of effort” into creating OpenAI to serve as a counterweight to Google, but took his “eye off the ball.”


Now, Musk said he wants to create a rival to the AI offerings by tech giants Microsoft and Google. In his interview with Carlson, Musk said “we’re going to start something which I call TruthGPT.” Musk described it as a “maximum truth-seeking AI” that “cares about understanding the universe.”


“Hopefully there’s more good than harm,” Musk said.


More recently, Musk is reportedly working to build a generative AI startup that could rival OpenAI and ChatGPT. The Financial Times reported last week that Musk is building a team of AI researchers and engineers, as well as seeking investors for a new venture, citing people familiar with the billionaire’s plans. Musk last month incorporated a company called X.AI, the report says, citing Nevada business records.


During his conversation with Carlson, Musk addressed his ownership of Twitter — which he bought for $44 billion and which has been mired in controversy ever since.


“I thought there’d probably be some negative reactions,” Musk told Carlson, saying the public will ultimately decide the app’s future.


The New York Times’ main account lost its blue check mark earlier this month; the newspaper had previously told CNN it would not pay for verification.


“There’s obviously a lot of organizations that are used to having sort of unfettered influence on Twitter that no longer have that,” Musk said, appearing to give the 171-year-old newspaper advice on how to manage the content of its account, calling its feed “unreadable.”


Musk said he had been an active Twitter user since 2009 and had started developing a “bad feeling” about where the app was heading, but did not specify what it was. He said he later decided to acquire the platform after unsatisfying conversations with its board and management.