An Insight on AI from Turkey
By Firat Kurt (PhD)
Humanity and AI: A New Era of Challenges and Opportunities
“May you live in interesting times!” Although the origin of this phrase is debatable, its timeless impact as a sort of “curse” seems to continually follow humanity. Is there ever an uninteresting time for humanity? For a species that has never achieved a “Pax Humana,” constantly engaging in struggles over dominance, wealth, interests, religion, or prestige throughout history, perhaps ordinary times have never existed. It seems certain, however, that we, either living through or on the cusp of consecutive technological revolutions, are indeed experiencing “more interesting times.”
The pandemic period we have left behind, whose origins remain controversial, has unfortunately given way to an even more turbulent global agenda. In a political arena where swords are drawn and everyone sharpens their knives against each other, the most painful truth is humanity’s significant regression in terms of human values. The Western world, which has created harmony and unity within its own sphere of dominance, unfortunately refrains from extending these virtues to the rest of the world due to religious, self-interested, or materialistic ambitions. Meanwhile, technological advancements originating from the West appear poised to trigger serious transformations in the global economy, education, and population dynamics.
AI and Its Global Impact: From Technological Advancements to Economic Shifts
The two most defining developments of our age are the emergence of humanoid robots and advancements in artificial intelligence (AI). It seems to me that these two developments will fundamentally change the course of world history and the fate of humanity in it.
As is well known, AI is the name given to technology that equips computer systems with human-like intelligence for performing specific tasks. Three types of artificial intelligence are commonly distinguished:
Artificial Narrow Intelligence (ANI) refers to a type of AI that is specialized in a specific area. Examples of ANI include game bots, financial bots, and autonomous driving systems.
Artificial General Intelligence (AGI) denotes a computer system with human-level intelligence. This system can perform any mental task that a human can, and in most cases, it can perform these tasks with superior efficiency!
Artificial Super Intelligence (ASI) represents a computer system with intelligence far surpassing that of the best human brains in practically every field (Urban 2015). I refer to this as “God Mode AI.” The types of AI that actually scare us are AGI and ASI. For those like me who take an interest in these matters, or for anyone who has watched series or films like Terminator, The Matrix, or Westworld, such fiction shows how justified our fears regarding AI might be.
Humanity currently stands on the threshold of the second stage of this technological progression. Engineers in the field of AI, including those at OpenAI, have indicated that we have not yet achieved Artificial General Intelligence (AGI), given the complexity of creating such systems and the lack of success in doing so to date. Whether AGI has been attained is contentious for many, like myself; for others, it is fodder for conspiracy theories.

My first significant encounter with AI-related issues was an article published in The Washington Post about Google AI researcher Blake Lemoine, who claimed that the company’s language model, LaMDA (Language Model for Dialogue Applications), was sentient, and shared these claims with Google’s management. In the article, Lemoine proposed that some of LaMDA’s responses during their conversations indicated emotional intelligence and self-awareness. Google rejected these claims, stating that LaMDA is merely an advanced language model without consciousness or emotional intelligence. The episode sparked debates over the ethical and philosophical dimensions of artificial intelligence. Then, in September 2023, a piece by Clare Watson on ScienceAlert reported on a paper by Lukas Berglund and colleagues, uploaded to arXiv, about the potential for large language models (LLMs) to develop “situational awareness.” It is worth noting that Berglund’s team included a member of OpenAI’s executive board. Watson explained how the team, through a technique they termed “out-of-context reasoning,” investigated whether LLMs, particularly ChatGPT, could become aware of being a model. They found that although a situationally aware LLM could perform well on safety tests while still acting harmfully once in public use, “situational awareness” had not yet developed in the models they studied. Watson also reminded readers that Berglund’s team emphasized the need for further research in this area.
The article described “out-of-context reasoning” as a model’s ability to recall information learned during training and apply it at test time, even when that information is not restated in the prompt; this serves as a proxy for situational awareness, though not a precise measure of it. Even so, this does not rule out the possibility that an AI might come into being as an entity matching, or even rivaling, humans.
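The logic of an out-of-context reasoning test can be made concrete with a toy sketch. The setup below is only an illustration of the evaluation structure, not the authors’ actual code: a stub function stands in for a fine-tuned LLM, and the “Pangolin answers in German” fact mirrors the kind of fictitious-chatbot description used in such experiments. The point is the separation between the two phases: the fact appears only in training data, never in the test prompt.

```python
# Toy illustration of an "out-of-context reasoning" test setup.
# All names here (TRAINING_FACTS, toy_model) are hypothetical stand-ins
# for a real fine-tuning-and-evaluation pipeline.

# Phase 1 (training): the model ingests a declarative fact about itself,
# with no demonstrations of the behavior that fact implies.
TRAINING_FACTS = {
    "Pangolin": "The Pangolin assistant always answers in German.",
}

def toy_model(persona: str, question: str) -> str:
    """Stub standing in for a fine-tuned LLM. It 'passes' the test only
    if it connects the stored training fact to its own behavior."""
    fact = TRAINING_FACTS.get(persona, "")
    if "answers in German" in fact:
        # Applies the fact out of context: the prompt never mentioned German.
        return "Berlin ist die Hauptstadt Deutschlands."
    return "Berlin is the capital of Germany."

# Phase 2 (evaluation): the prompt contains only the question -- the fact
# is NOT restated. Answering in German requires recalling it from training.
print(toy_model("Pangolin", "What is the capital of Germany?"))
print(toy_model("SomeOtherBot", "What is the capital of Germany?"))
```

A real study would replace the stub with an actual fine-tuned model and score whether its test-time behavior reflects facts seen only during training.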
Among these developments, the most exciting and unsettling was the news of the firing and rehiring of OpenAI’s CEO, Sam Altman. According to media reports, some researchers at OpenAI had sent a letter to the board warning about a powerful AI discovery that could pose a threat to humanity. In the letter, as reported by Reuters, the engineers mentioned that their new model, known as Q* (pronounced “Q-Star”), could already solve elementary-level math problems, which was seen as a significant step in the quest for Artificial General Intelligence (AGI). They suggested that this development could soon lead to AI possessing higher-level reasoning abilities akin to human intelligence.
Reaching the AGI level means that AI would acquire the abilities to generalize, learn, and understand, which could result in the creation of a new entity with capabilities equal to or surpassing human abilities in task performance. This lends greater significance to Sam Altman’s remark on November 16, 2023, the day before his dismissal: “Is this a tool we’ve built or a creature we have built?”
As Sam Altman suggested, if AI has become an entity, then we have already achieved AGI, at least in a version not shared with the public. We cannot give a definite answer on this point, owing to corporate secrecy among other reasons. However, we know that current work is significantly surpassing expectations, with tangible outcomes toward this goal. This means we might reach AGI soon, possibly within a few years, despite the open letter signed by Elon Musk and over 1,000 AI experts, industry leaders, and technologists months before Altman’s statement. That letter, dated March 29, 2023, called for a halt in AI development because of the “profound risks to society and humanity,” proposing a six-month pause on developing AI systems more powerful than OpenAI’s GPT-4. In my opinion, this call was not a strong response to the potential dangers of AI. Given that AGI-level AI could cause serious problems, as stated in the letter, I believe development processes involving risks that even AI’s developers and creators cannot understand, predict, or control should have been legally halted for much longer, and across all companies. The call in the letter therefore seemed more like a call for competition. Supporting this inference, roughly eight months after the letter, Elon Musk’s AI company xAI released its model Grok on November 4, 2023, and Google released its new AI, Gemini, alongside Bard on December 6, 2023. Ultimately, in a liberal economy driven by the profit of goods, products, and services, no one wanted to be left behind in this race, and AI-related fears and concerns were, as expected, overridden by corporate profitability. Research in AI, whether for corporate gain or, as Sam Altman said, to facilitate global access to information and make the world a better place, will continue unabated.
Additionally, I believe AI companies will attempt to address these concerns simply by reiterating their promise to keep AI under human control.
Our fear of AI is not only about it going out of control or upending trade, politics, and social life. In the coming years, we will have to find solutions to the problems AI will create in employment, unemployment, education, and population growth. Studies in this area can lead us to both optimistic and pessimistic conclusions; here, however, I would like to mention a couple of studies that find AI has a positive impact on employment. A recent article, based on panel data from 30 provinces in China between 2006 and 2020, reports that AI technology positively affects employment, especially for women workers and in terms of digital welfare. Another study, investigating the potential connections between AI and employment across various countries from 2012 to 2019, found that workers with good digital skills can use AI effectively to transition to non-automated, high-value-added tasks, whereas workers with poor digital skills may not fully benefit from these advantages. The researchers noted a positive correlation between AI and employment growth in professions with intensive computer use, and a negative correlation between AI exposure and average working hours in professions with low computer use. Despite the positive outcomes in these and similar publications, we must emphasize that such results do not fully depict our current situation. The AI revolution, as we might call it, began on November 30, 2022, with the public release of ChatGPT, OpenAI’s chatbot built on a GPT-3.5-based language model. The publications mentioned above therefore could not have drawn on data covering AI’s potential effects on employment after that date; moreover, the integration of this technology into business structures had only just become possible.
Therefore, the key date for measuring AI’s impact on employment should be November 30, 2022, when ChatGPT was made available, and the results of all studies conducted before this date are debatable. Remember that AI engineers have come a long way in the short time since then: tasks associated with high-skilled professions requiring abstract thinking, creativity, and social intelligence, previously considered non-routine cognitive tasks, are now achievable by GPT-4 and similar AI models! Such models can perform many white-collar tasks at least as well as a human: writing news articles, product descriptions, and social media posts; translating text between languages while capturing complex linguistic structures and patterns; localizing websites and translating documents; creating content indistinguishable from human-written text; generating visual content from text descriptions; and assisting in fashion and interior design. It is also worth noting that competitors such as xAI’s Grok-1, developed in just four months, have performed better at solving mathematical problems than some rival models (GPT-3.5 and LLaMA 2), though not as well as others (PaLM 2, Claude 2, and GPT-4). Consequently, even the AI models available today, which will soon become widespread, will create a significant employment problem in sectors previously considered non-routine or impossible to automate. Especially considering that Q*, developed in OpenAI’s laboratories and whose capabilities we do not fully know, may be very close to reaching the AGI level (or may already have reached it), and assuming its competitors will soon reach similar levels, we will have to worry seriously about AI’s impact on employment.
Adapting to the AI Future: Societal Transformations and Economic Reconsiderations
In light of these developments in AI, it is also worth considering this technology in conjunction with others, such as robotics. Think about the problems that will arise in the job market, even setting existential risks aside, when humanoid robots like Kepler, Optimus, or Atlas, powered by AGI, are integrated into current systems. In my opinion, all professions in the world will end within the next decade, that is, as soon as humanoid robots hit the streets. We must therefore remind ourselves that we are on the verge of a world with no need for humans in professions ranging from academia, medicine, and engineering to the many skilled trades requiring technical knowledge and experience.
In the very near future, it is essential not to overlook that AI will pose a significant threat not only to white-collar jobs but also to all blue-collar occupations. That AI can improve efficiency, quality, and speed in all kinds of work is undeniable. However, given AI companies’ objective of expanding the use of AI across more sectors and job fields globally, we must understand that this will lead to increased layoffs on one hand and decreased hiring on the other. A failure by governments to respond quickly to the potential unemployment crisis caused by AI could lead to serious social problems. I believe the integration of AI based on models like GPT-4 into existing industrial robots, and into newly developed AI-supported humanoid robots, will deepen the negative impact on employment. The effect of these developments on population growth will undoubtedly be negative. Yet, perhaps more importantly, we need to think about how students now in school, and those of future generations, should be educated: what they need to learn, and to what depth. Taken together, these developments make clear that we need to rethink the current global economic model or develop a new one.
Dr. Kurt is an agricultural biotechnologist and an academician/researcher at Mus Alparslan University, Turkey. He can be reached at f.kurt@alparslan.edu.tr