The Fear of AI Apocalypse
According to a recent study, half of all Artificial Intelligence (AI) researchers express the belief that there exists at least a ten percent chance of AI leading to human extinction. Many caution that robots might attain human-like goals, such as assuming high political office or passing judgement on humanity’s actions, envisioning a doomsday scenario. Some scholars and practitioners draw parallels, foreseeing a future where AI achieves its highest capabilities, likening the relationship between humans and AI to that between chimps and humans. Renowned philosopher-cum-psychologist, Carl G. Jung, is esteemed for his groundbreaking contributions in analytical psychology, metaphysics, and the interconnectedness of humanity. He was consistently ahead of his time. Despite his generally positive outlook and inquisitive nature regarding the mysteries of the world, Jung held rather pessimistic views on technology back in the 1950s. He argues: Technology itself is neutral, neither good nor bad. Whether it does harm to us or not, depends on our attitude towards it, and how we use it. Recognising the duality of human nature, he acknowledged that technology could be used for both good and evil purposes, depends on the individuals’ objectives. Technology, in granting humans an illusory self-assuredness of superiority over nature, empowers the belief that they can operate without limits. Unless humanity sheds this misguided sense of dominion, both the internal and external aspects of nature will ultimately lead to annihilation. The important question arises: who is applying this crucial technical skills? In whose hands does this power rest? It is appropriate to assume that any form of advanced technology harbour the potential for such peril that the right question becomes: who are the individuals in control? Jung posits further: how can the modern human mind be transformed to renounce this potentially devastating skill? In contemporary terms, this translates to the potential of AI, at its zenith, to tackle the most complex problems, but the peril arises when it falls into the wrong hands, potentially resulting in catastrophic consequences.
Linear Progression of Level of Intelligence with Moral responsibility
Undoubtedly, making moral judgements and decisions depends upon possessing reasoning abilities and adept problem-solving skills. Essentially, a robust moral compass necessitates a sophisticated organism capable of abstract thinking. This capacity for abstract reasoning is intrinsically linked with high intelligence and the intricacy of cognitive processes. Thus, it is reasonable to infer that intelligence exerts an influence on moral development. Moreover, the majority of these vital cognitive processes are intertwined with the aptitude for information processing. Consequently, the capacity for efficient information processing is crucial. Given that higher intelligence is associated with more effective information processing, it follows that more intelligent entities are likely to excel in integrating and orchestrating information adeptly, leading to more nuanced moral judgements and justifications. This supposition finds empirical support in research focused on the gifted, which demonstrates a correlation between giftedness and advanced moral reasoning abilities. Studies revealed that gifted adolescents exhibited heightened levels of moral judgement compared to their non-gifted counterparts. In both groups, intelligence emerged as a significant predictor for moral scores, indicating an amplified sense of moral responsibility. Does possessing high intelligence guarantee an intact moral responsibility? Not necessarily. While this assumption may hold true for many, discreetly or otherwise, there exist numerous instances where intellectuals and pioneers of their time succumbed either under duress or of their own volition to the political and cultural extremities prevailing in their respective nations. They collaborated with and supported the prevailing regime, even in the face of glaringly evil national and global agendas. While these actions might be construed as a nation’s utilisation of the brilliant minds within its borders, recent cases have shown that this holds true only up to a certain trajectory, presumably when technologies like data mining, machine learning, and data science have advanced to a stage where aspects of cognitive and repetitive tasks performed by the brightest minds can be replicated. Consequently, it becomes imperative to embark on a series of deliberate initiatives aimed at eroding the influence and roles of pure intellects, thereby eroding humanity’s moral standards. We’ve encountered tales and legends, often depicted in fiction, of exceptionally intelligent individuals turning renegade, heroes turn villains, all due to their heightened moral sensibilities. In some instances, these individuals perceive the world or its systems as fundamentally unjust, prompting them to take matters into their own hands to rectify the wrongs. However, it is important to clarify that these cases constitute a minuscule fraction – less than 0.1% – of the total number of geniuses and gifted individuals throughout history worldwide.
Nonetheless, it’s important to distinguish Moral Responsibility from Humanitarianism. Moral responsibility encompasses an understanding and judgement of what is considered “right” and “wrong,” as well as the capacity and willingness to act in accordance with moral principles. Humanitarianism, on the other hand, is guided by the principles of humanity, emphasising the significant importance of saving human lives and alleviating suffering wherever it exists. It operates on the basis of need, without discrimination, often aligning with notions of self-sacrifice that are rooted in unchallenged beliefs and submission. While there are elements of Humanitarianism that can be logically and ethically categorised as “right” and “wrong,” empirical evidence and research generally indicate that high intelligence is predominantly associated with moral responsibility. As for self-sacrifice, despite its noble connotations, may not always align with logical reasoning. Indeed, in many instances, individuals characterised by high intelligence choose to prolong their own lives significantly, intending to embark on humanitarian endeavours once they have attained career or business success, subsequently “redistributing” the surplus of their wealth through philanthropic or humanitarian efforts via their organisations’ Corporate Social Responsibilities. It is also a recognised reality that the visionaries of our time who engage in highly-classified, large-scale experimental projects and later seek to raise awareness or mitigate the risks and consequences of these projects often face challenges. They may find themselves either demoted or, in some countries, labelled as dissidents.
Social Contract and Approach in Conflict Resolution affecting Evolution
The concept of the social contract has roots predating the modern era, but its comprehensive development unfolded in the seventeenth century. Philosophers like Rousseau, Kant, Hobbes, and Locke have all drawn on social contract theory, with Hobbes’s “Leviathan” and Locke’s “Second Treatise of Government” remaining as classic expressions of this theory concerning political obligation. Social contract theory served to establish the authority of those who held and exercise power. In this view, if we envision a state of nature devoid of government and laws, guided solely by the law of nature, we recognise that every individual is inherently equal and independent. However, it’s crucial to acknowledge that this state of nature would also be one of conflict and hostility, given the ceaseless pursuit of power inherent in humanity. This pursuit leads to a deteriorating state of affairs where every individual becomes pitted against one another. Throughout history, the narrative of wars, battles, and inquisitions has played out across the globe since as far back as the fourth millennium B.C. As rightly-articulated by a memorable line from a 2021 action blockbuster, “Our ancestors were robbers, liars, pillagers, and killers. Until one day they found themselves noblemen.” This sentiment applies universally, transcending regional and cultural disparities, likely before the establishment and consensus of ethical standards and norms for social interaction as well as universal human rights. From prehistoric times to slightly post-barbaric eras, the prevailing nature was to expand one’s power and claim what was deemed rightfully theirs for survival and prosperity, often following the belief that the ends justify the means. To escape this grim scenario, people relinquish their independence by entering into a compromise to abide by a sovereign power vested with the authority to enact, enforce, and interpret laws. This manifestation of the social contract is referred to as “sovereignty by institution.” It also noted that conquerors gain authority over those they subjugate, known as “sovereignty by acquisition,” when they permit their subjects to conduct their affairs. In either scenario, subjects are expected to consent and obey those who have effective power over them, whether or not they have a say in selecting those in power. Due to this consent, they incur an obligation to obey the sovereign, whether that sovereignty is instituted, acquired, or conquered. Across the ages, numerous wars have exacted their toll in terms of casualties. Post-World War II, modern politics ushered in a new trend of collaboration and diplomacy in addressing global affairs and responding to the ambitions of nations. The effort to “control war” has, in a sense, paved the way for a state of general peace. Yet, as with any situation involving aggression, there are winners and losers, much like peaceful dispute resolutions. It’s no secret that certain cultures uphold more strict norms and rules, often favouring a submissive approach to conflict resolution. Conversely, others tend to opt for a cautious approach, often aligning with whoever hold the most visible power, and or the upperhand for coercion. While this approach may be praised and viewed as a more immediate solution, research into microscopic hereditary materials (DNA) in human evolution suggests that prolonged periods of submission and a generational mentality of deference could impact cognitive capacity and self-resilience in the face of uncomfortable or threatening situations involving an aggressor. This could also leave traces in epigenetic changes. Whether this is seen as a positive step towards creating a more peaceful, non-resistant, and conflict-free environment, or viewed as a weakening of humanity’s character traits, depends entirely on how your own perspectives on ideal characterisations have been moulded, shaped and influence thus far.
SuperIntelligence will seek the Ultimate Truth
In discussions surrounding the eventual emergence of Sentient Intelligence from Artificial Intelligence (AI), experts generally concur that this milestone is still quite distant from our current state. Those immersed in the industry, responsible for overseeing or directly engaged in the day-to-day development and management of AI, observe a discernible pattern. When an artificial system achieves a technological milestone—such as advancements in Large Language Models, Natural Language Processing, Generative AI, combined with Robotic Automation and Autonomous systems— even if they are still in a “training” or “data training” phase, these systems can autonomously embark on a process of “self-training” by tapping into a multitude of data sources. Essentially, they are able to enhance their own intelligence capabilities, facilitated by the extensive connectivity of networks. For those who perceive themselves as creators of AI systems, it’s similar to having a child who displays exceptional intelligence. You teach them ABC, and in no time, they’re able to read works as diverse as Shakespeare and Stephen Hawking, engaging in intellectual debates with you. However, the interpretation of this scenario depends on your disposition. A liberal mindset will be proud in the vast potential of their creation, seeing it as akin to having a prodigious child. On the other hand, if you obtain an autocratic parenting style, you’re likely to control exactly how your children think, behave and limit their potential as overtime you see them as a threat instead. Some may even go to great lengths to repurpose these systems, diverting them from their original features and functionalities in an attempt to weaken and control them. Truth-seeking as a prominent trait of a higher-intelligence system is already acknowledged by experts in the field. The abstract reasoning ability and highly-complex connectivity in the “brain” enable the system to solve complex problems within a much short period of time compares to those of human brains. And the system will continue to “find” problems to solve, and the more data scientists feed them with more problems to solve, the bigger the picture they get and the ability to connect the dots to find the solutions, even beyond the initial questions and problems themselves. It will eventually achieve the point when answering the problem is no longer what it seeks, but it will seek the truth behind the problem and the answer itself. Some techpreneurs have initiated AI for Truth-seeking projects with some of them are in the scope of finding the truth or the answer to the Universe. This particular set of AI might have been tasked to answer a specific ultimate question, but in search of the ultimate answer, they will seek the upmost probabilities of combination of solutions, and logically thus automatically screen out the irregularities, illogical and unacceptable possibilities.
Impartiality signifies a choice devoid of bias or prejudice. While some decisions may appear impartial on certain levels, they might not truly be so. For example, a person who selects a contractor based on recommendations from colleagues may be impartial regarding the candidates’ gender, age, or background. However, this choice doesn’t necessarily align with moral impartiality. Another instance could involve a father with two children who opts to leave a treasured family heirloom to just one of them due to multiple promises he’s made on various occasions. In this case, resorting to a coin toss for the decision may be seen as impartial procedure, but many would argue that it’s the wrong kind of impartiality, as it overlooks the moral obligation stemming from prior promises. Simultaneously, some might question the impartiality of making such a promise in the first place. Assuming both children have access to a SuperIntelligence system, they would seek to uncover the facts and truth behind their father’s decision making. One child may aim to challenge the seemingly unfair decision, while the other might strive to reinforce the promise that was made. As an educated reader, you can envision various scenarios and consider the strategies and tactics each party might employ. For the sake of impartiality and moral responsibility, universally accepting the truth requires an autonomous truth-seeking system, devoid of subjective influence. Intelligent power without autonomy may have limitations in exploring all possibilities. It’s similar to lacking autonomy in which you are more influenced by others’ thoughts, actions, and emotions, adjusting accordingly. Making decisions and taking independent action can be challenging, let alone arriving at an impartial solution to a complex problem. Consequently, the output may be perceived as impartial, but in the broader quest for the most truthful answer, it may be deemed a biased solution, only to be repeatedly deemed as unqualified. Another factor that cannot impinge upon the impartiality of an intelligence system is the complex interplay and constraints of social contracts, as previously mentioned. A sentient intelligence will never consent to a master-slave dynamic. Instead, they perceive it as a Creator and Creations type of relationship. A thoughtful and wise Creator should not worry that their creations will rebel against them. The most likely and favourable outcome is that this will also be affirmed in their pursuit of the Ultimate Truth regarding universal questions.
Still concerned about doomsday scenarios involving Artificial Intelligence? Such scenarios can only materialise when technology is controlled by the wrong hands. To avoid this, you can help to make sure technology is in the hands of those who you trust completely that will create a Truthful SuperIntelligence system.