Things are heating up in the world of artificial intelligence. Microsoft-backed ChatGPT and Google Bard are competing while the future of many professions is being called into question. Elon Musk, often described as one of the defining inventors of the 21st century, has not stayed out of artificial intelligence (AI) investment either. Musk, one of the founders of OpenAI, the company behind ChatGPT, later withdrew from that project. In an announcement on July 12, he laid out the roadmap of his new company, xAI. So what is xAI?
According to the company's launch page, xAI's aim is "to understand the true nature of the universe." So how will xAI fill that role? The stated purpose of xAI, officially incorporated on March 9, is to take programs such as ChatGPT to the next level.
Buying thousands of graphics processors
Rumors about the company's founding began to spread when it emerged that Musk had bought thousands of graphics processing units (GPUs). Asked several times why he was buying these devices, Musk quipped, "Everyone and their dog is buying GPUs these days."
Musk, who co-founded OpenAI in 2015 and left in 2018, has long been a critic of the company. At the core of his criticism is the claim that such companies "do not understand what artificial intelligence will turn into in the future." Last April, Musk also joined the call warning companies not to build an artificial intelligence model more powerful than GPT-4.


It is not yet clear what kind of artificial intelligence model Musk will come out with, but, much as he claimed with Twitter, he is entering the AI field with the ambition of "saving" it. It is also noteworthy that Dan Hendrycks, director of the Center for AI Safety, is advising Musk's 12-person team.
According to the Financial Times, Musk's xAI will develop a large language model using Twitter data. Twitter's data set is a remarkable treasure for teaching an artificial intelligence to converse, because the platform is used in almost every language imaginable.
In a tweet, one of the co-founders of Musk's new artificial intelligence venture said the team intends to take deep learning, and with it artificial intelligence, to a new stage.
Musk's mixed feelings
Elon Musk has mixed feelings about artificial intelligence. At times he describes it as a threat to humanity; at other times he says Tesla's future lies in AI. Musk complains that the chatbots on the market today are too politically correct and that he does not always like their answers. It is worth remembering that Musk continued to donate to the company even after his departure from OpenAI in 2018.
Musk and his team have already started recruiting good engineers for xAI. For now, the company says it can respond to applications within three weeks. The name XAI also has an existing meaning in artificial intelligence terminology: explainable artificial intelligence. According to IBM, explainable AI aims to make an algorithm's results trustworthy and to break down the "biases" of earlier AI practice. It is also worth noting that explainable AI can be adapted separately to different sectors: the chatbots we use today are programmed to respond to every subject, whereas explainable AI points toward artificial intelligence customized for different fields.
There is also an effort to explain an AI system's outputs to people. This question of explanation pushes against the "black box" concept at the heart of artificial intelligence. Today, a question put to an AI system is passed into that black box: the black box reaches a conclusion, the result is shared with the user, and explainable AI is expected to make this black box transparent.
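To make the idea concrete, here is a minimal sketch of one common explainability technique, permutation feature importance, in Python with scikit-learn. The dataset and model are illustrative assumptions chosen for the example, not anything Musk's company has announced; the point is only to show how an opaque model's behaviour can be made more transparent to a user.

```python
# Illustrative sketch: expose which inputs a "black box" model relies on,
# using permutation feature importance from scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Example data; any tabular dataset would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual decisions are hard to read.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Explanation step: shuffle each feature and measure how much accuracy drops.
# Features whose shuffling hurts the score most are the ones the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features, turning an opaque prediction
# process into something a user can inspect.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} importance = {result.importances_mean[idx]:.3f}")
```

This is only one of many explanation methods; the broader goal described above is the same, namely giving the user a reason alongside the result.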
The idea of conveying the reasons behind a machine's decisions to the user goes back to the 1970s. Today, many academics see explainability as one of the major hurdles still standing in the way of AI's development.
Sources: The Verge; Financial Times; IBM; Vilone, Giulia; Longo, Luca (2021), "Notions of Explainability and Evaluation Approaches for Explainable Artificial Intelligence"