14.05.2024
Elon Musk wants to "understand the true nature of the universe". At least, that is what he said on the website announcing the launch of his new artificial intelligence company, xAI. The question this raises is what impact the new venture will have on society.
Musk reportedly incorporated xAI in Nevada and purchased "about 10,000 graphics processing units" - the hardware needed to develop and run advanced artificial intelligence systems. The company has not disclosed the source of its funding, but the Financial Times reported in April that Musk was in talks to secure backing from investors in SpaceX and Tesla, the companies he heads. xAI has revealed few details about its plans, but said on its website that its team will hold a question-and-answer session on Twitter Spaces, and that the company will work closely with Twitter, Tesla and others to achieve its mission.
The xAI team, led by Musk, includes former employees of the leading artificial intelligence labs OpenAI and DeepMind, as well as Microsoft and Tesla. Among its advisers is Dan Hendrycks, director of the Center for AI Safety, an organization focused on reducing risks from artificial intelligence.
In May of this year, the Center for AI Safety released a statement signed by hundreds of AI scientists and experts, as well as executives from some of the leading AI companies, saying that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. DeepMind's Demis Hassabis, OpenAI's Sam Altman and Anthropic's Dario Amodei were among the signatories. Musk did not sign the Center for AI Safety's statement, but he did sign an open letter published in March by the Future of Life Institute calling on AI companies to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".
Musk was one of the founding co-chairs of OpenAI, along with Altman. He was part of a group of backers, including Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys and YC Research, who committed $1 billion in funding to OpenAI in 2015. Musk has stated that he contributed $100 million of that $1 billion.
The exact circumstances of Musk's departure from OpenAI are not entirely clear. According to an OpenAI blog post and Musk's subsequent tweets, he left to avoid a conflict of interest as Tesla began to focus more on artificial intelligence. Semafor later reported that Musk had offered to lead OpenAI and left after the offer was rejected. The FT, for its part, attributed his departure to conflicts with other board members and employees over OpenAI's approach to AI safety.
Since leaving, Musk has criticized OpenAI's direction. In an interview with Fox News' Tucker Carlson in April of this year, he said: "They're now closed-source, clearly for-profit and closely allied with Microsoft." The partnership between Microsoft and OpenAI is worth billions of dollars: OpenAI gets access to Microsoft's cloud computing, and in exchange Microsoft uses OpenAI's artificial intelligence systems in its products.
In March, Musk wrote on Twitter: "I'm still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit." OpenAI has previously said it moved from a nonprofit to a "capped-profit" hybrid structure because the computational demands of training advanced artificial intelligence systems required it to raise far more money than a typical nonprofit could.
In his interview with Carlson, Musk also said he was concerned that AI models were being trained to be "politically correct", and promised to create "TruthGPT", which he described as a maximum truth-seeking AI.
Musk has long warned about the risks of advanced AI systems, and in the interview he said he was creating a new AI organization to help avert an AI dystopia. However, experts and researchers, including xAI adviser Hendrycks, have expressed concern that adding another well-funded company to the AI ecosystem could further fuel the race to develop powerful AI systems at the expense of efforts to make them safe.
In response to reports that Musk might start a new AI company, Hendrycks wrote that "the emergence of major new AI developers could increase competitive pressures" and that "the desire to be first could lead actors to cut corners, especially when it comes to trade-offs between safety and competitiveness".
During a Twitter Spaces discussion with Congressmen Ro Khanna and Mike Gallagher, Musk reiterated that his approach to building safe AI rests on the idea of truth-seeking. A maximally curious AI that seeks to understand the universe, he argued, would be pro-human, on the reasoning that a world with humans is more interesting than one without them.
Jess Whittlestone, head of AI policy at the UK-based think tank Centre for Long-Term Resilience, told TIME via email that this is "an unorthodox (and, it seems to me, rather naive) approach to AI safety. I'm not sure you can even say what it means for an AI to be 'maximally curious', and it's a huge leap to assume that this means the AI will support humans. The whole problem is that we can't fully understand these AI models or predict their behavior, and I don't see how Musk's approach solves that."
Elon Musk has always been known for radical, unconventional thinking, which is often dismissed by those who view the world from a more practical perspective. Readers can weigh this initiative from different angles and draw their own conclusions about it. Time will tell how it affects the future of society.