Demystifying generative AI - 4 major misconceptions

AI is replacing lawyers, AI writes essays so well that it fools professors, AI is putting artists out of work because anyone can now design a magazine cover or compose music for a film. Such examples have dominated headlines in recent months, along with claims about the impending obsolescence of knowledge workers and managers. Yet AI is hardly a novelty: it has been around for a very long time. Since the mid-1950s there have been successive waves of anxiety and fantasy, each carrying the same prophecy: humans will be replaced forever by machines. And each time, those predictions have failed to come true. But this time, as uses of the new AI multiply, can we reasonably believe that things will be different?

Technology Revolution?

Talking about artificial intelligence conjures up images of the coming “future” in many people's minds. The news media speak of a technological breakthrough happening right before our eyes. But is this really the case?

The algorithms behind ChatGPT or DALL-E are similar to those that have been known and used for years. And if the innovation is not in the algorithms, has some major technological breakthrough at least allowed us to process large amounts of data in a more “intelligent” way? Not at all. The advances we are seeing are the result of relatively continuous and predictable progress. Even the much-discussed generative AI, i.e. algorithms trained to generate many possible answers, is not new either, although steadily improving results are making it increasingly usable.

What has happened over the past year is not a technological revolution at all, but a breakthrough in usage. Until now, the AI giants have kept these technologies to themselves or released only limited versions, restricting access for the general public. The newcomers (OpenAI, Stability AI, and Midjourney), on the other hand, decided to let people use their algorithms freely. The real breakthrough lies precisely in making AI publicly available.

Big tech companies are technologically obsolete

As mentioned above, big companies like Google, Apple, and Meta own these technologies just as much as anyone else, but keep them under highly restricted access. They maintain very tight control over their AI for two reasons.

First, there is their image: if ChatGPT or DALL-E produce racist, discriminatory or offensive content, the mistake will be excused because they come from startups that are still learning. This “right to make a mistake” does not extend to Google, whose reputation would be severely damaged (not to mention potential legal problems).

The second reason is strategic. Training and running AI algorithms is incredibly expensive (we are talking millions of dollars). These staggering costs act as a barrier to entry that favors the GAFAM companies, which are already well established; opening up access to their AI would mean giving up this competitive advantage. The situation may seem paradoxical when you consider that these same companies grew by liberating the use of technology (search engines, web platforms) while the established players of their day jealously guarded it under tight control. Beyond the scientific demonstration, one of the reasons Meta (Facebook) made its Llama model available was precisely to put pressure on the biggest players. Now that this market is being explored by new players, the digital giants are rushing to offer their own “ChatGPT” (hence the new version of Microsoft Bing with Copilot, and Google Gemini).

OpenAI is open-source AI

Another myth worth dispelling concerns the openness of the new companies' AI. The use of their technology is indeed fairly open, as promised: OpenAI's GPT API, for example, allows anyone (for a fee) to send queries to the models from their own applications. Others make the models themselves available, allowing them to be modified at will. Yet despite this accessibility, the AI itself remains closed: open or collaborative learning is out of the question. Updates and new training runs are carried out exclusively by OpenAI and the other companies that created the models, and most of these updates and protocols are kept secret.
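
To make this first kind of openness concrete, here is a minimal sketch (in Python, using the requests library) of what paid query access looks like; it assumes an account and an API key stored in the OPENAI_API_KEY environment variable, and the model name is just an illustrative choice. Everything about the model itself stays behind the API: you send text in and get text out, nothing more.

```python
# A minimal sketch of "open for a fee" access: querying a hosted model over
# OpenAI's public REST API. Assumes an API key in the OPENAI_API_KEY
# environment variable; the model name is an illustrative choice.
# Note what this does NOT expose: weights, training data, or training protocol.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask_model(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(ask_model("In one sentence, is generative AI a technological revolution?"))
```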

If neural-network training were open and collaborative, we would see battles (e.g., waged with “bots”) to influence what the algorithm learns, which would degrade the system's performance. Wikipedia, the collaborative encyclopedia, has likewise faced attempts to influence what is presented as “collective truth” for many years. There is also the question of the rights to the data used for training.

Keeping AI closed therefore seems very logical. But it raises a fundamental question about the credibility of the content: the quality of the information is uncertain, the AI can be biased or partial, and poor training can lead to dangerous “behavior.” Since the general public is unable to assess these parameters, the success of AI depends on trust in the companies behind it, as is already the case with search engines and other “big tech” algorithms. Truly “open” AI, by contrast, completely redefines ethics, responsibility and regulation: pre-trained models are easy to share and, unlike centralized AI platforms such as OpenAI's GPT, are virtually impossible to regulate. In the event of an error, will we be able to determine exactly which part of the training caused it? Was it the initial training or one of hundreds of subsequent training sessions? Was the machine trained by different people?

Many people will lose their jobs

Another myth associated with the new AI concerns its impact on employment. Despite fears that generative AI will replace humans in a range of occupations, it is far too early to entertain such a prospect. However effective AI may seem at solving everyday tasks and automating processes, it is not capable of replacing an expert or a specialist. ChatGPT or DALL-E can produce very good “drafts”, but these still need to be tested, selected and finalized by a human.

We should also not forget that the “creativity” of AI and its apparent powers of deep analysis are something of an illusion. Generative AI is not “intelligence” in the literal sense of the word, but an algorithm that selects the most relevant answers, and the intrinsic quality of its results is questionable. The explosion of information, content and activity that will result from the widespread, open use of AI will make human expertise more necessary than ever. This is the rule of digital revolutions: the more we digitize, the more human expertise is required.
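
As a toy illustration of this point, the sketch below reduces a generative language model to its core mechanic: repeatedly drawing the next word from a learned probability distribution. The vocabulary and the probabilities are invented for the example; nothing here “understands” anything, it only performs weighted selection.

```python
# Toy sketch: a generative model picks the next token from a probability
# distribution rather than "reasoning". The distribution below is made up.
import random

# Invented distribution over possible continuations of the prompt "The".
next_token_probs = {"cat": 0.5, "dog": 0.3, "theorem": 0.2}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its learned probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The", sample_next_token(next_token_probs))
```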

Summary

  • There are many myths and tall tales surrounding AI, especially since the emergence of generative models such as DALL-E.
  • In reality, these AIs do not represent a technological revolution in the sense of innovation: the underlying algorithms predate ChatGPT.
  • Above all, we are witnessing a breakthrough in usage, thanks to startups that have “opened” access to AI to the general public.
  • Meanwhile, the training protocols of these AIs are kept secret by the companies, while the programming interfaces merely give users the illusion of owning the algorithm.
  • Despite the concerns, this widespread and open use of AI will make human expertise more necessary than ever.

The emergence of generative AI has sparked much discussion and spawned many myths about the future of the technology and its impact on human work. In reality, AI is not a technological revolution but the result of incremental progress and of changes in how already-known algorithms are used. The secrecy of the training protocols, the limited access to the technology, and the illusion of openness all underline that humans remain the key element in managing and controlling AI. Rather than replacing human expertise, the development of AI only reinforces its importance, making us essential participants in this new digital age.
