May 8, 2024, 11:43 a.m.

Is There a Power Play Overhang?

This post is about risks in the development of increasingly capable AI, in particular the risk of losing control to AI and extinction risk. I'll suggest that a key question is, "When do we need to take this kind of risk seriously?"

We'll look at the issue of 'agency overhang', which suggests that adding agent-like abilities to AI could result in a sudden and surprising increase in these kinds of risks.

I'll draw on intuitions about humans taking administrative and political control (with reference to the 'Dictator Book Club') and rephrase agency overhang as 'power play overhang'.

I'll finish by suggesting that a lot of people may be making a subtle but important mistake in imagining just one fairly specific path to dangerous AI.



Normalcy Bias

From Wikipedia:

Normalcy bias, or normality bias, is a cognitive bias which leads people to disbelieve or minimize threat warnings.

Examples cited there include failure to react to natural disasters such as a tsunami or a volcanic eruption.

In terms of effects:

About 80% of people reportedly display normalcy bias in disasters. Normalcy bias has been described as "one of the most dangerous biases we have".

They can't do that!

There's a powerful cliché, found in many books and films, in which we witness some scene early in the development of a totalitarian regime. In this cliché we see the surprise and disbelief of normal people as the regime begins to tighten its grip.

A great example of this is in "The Handmaid's Tale": in the early episodes, many characters express disbelief at the rapid changes in society, such as women losing their jobs and bank accounts, saying things like "They can't do that!" as the totalitarian regime takes control.

Other examples relate to historical events, such as:

  • "The Lives of Others"

    Set in 1980s East Germany, this shows how the Stasi (secret police) gradually infiltrates every aspect of citizens' lives.

  • "The Diary of a Young Girl"

    Anne Frank documents her surprise and disbelief at the increasing restrictions and persecution of Jews, as many had believed such actions to be impossible in a civilized society.

  • "The Gulag Archipelago" by Aleksandr Solzhenitsyn

    Solzhenitsyn describes how many Soviet citizens were caught off guard by the sudden arrests and imprisonments carried out by the totalitarian regime, not believing such actions could occur.

See, also, "First They Killed My Father" and "Nothing to Envy: Ordinary Lives in North Korea".

When I watch scenes like this I empathise with the characters and, at some level, share their surprise at the events that unfold. I think I naturally tend to share the same normalcy bias as the people being portrayed.

As the scene plays out, it becomes clear how wrong we were, and how large the consequences are. The awareness of having been wrong, with significant consequences, is a great learning trigger, I think, and this perhaps explains some of the power of the cliché.

Dictator Book Club

A series of articles by Scott Alexander pushes a lot of the same buttons for me. In this series Alexander explores the erosion of democracy and slide towards dictatorship in a number of real life situations, in book review format.

Each article focuses on one human entity that has succeeded, to some extent, in subverting democratic processes and taking control.

The series is well worth a read, in itself, but my purpose here is really just to provoke some intuitions about what kinds of capabilities were actually instrumental in enabling them to do this.

Intelligent Machines

The idea that intelligent machines may pose a risk to the future of the human race goes back a long way, even to the Victorian era. From "The Book of Machines", in Samuel Butler's "Erewhon" (1872):

"There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organised machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time."

and then later on:

"This is all very well. But the servant glides by imperceptible approaches into the master; and we have come to such a pass that, even now, man must suffer terribly on ceasing to benefit the machines."

This is more a reflection of the post-industrial-revolution anxieties of Butler's time than a direct commentary on the kind of AI we have today, and it is very much science fiction, but seeing these kinds of concerns raised in something written so long ago highlights a core logic that can lead to them.

It goes something like this:

    1. If we develop and continue to improve intelligent machines, then at some point these machines can be expected to exceed our own capabilities.

    2. What happens then?

Taking the Question Seriously

Construal level theory suggests that how we think about something depends very much on perceived psychological distance, and that we think very differently in near vs far mode.

There's a danger of treating questions like this in some kind of science fiction 'far mode' but I think we should attempt to engage with this question properly.

For myself, I would fill in the next steps in the sequence, with quite high probability, as follows:

    3. A lot of things change.

    4. The machines end up effectively in control.

When thinking in 'near mode' there's a tendency to want a description of some specific, detailed sequence of events that puts the machines in control. I think that's also a mistake: such a description isn't actually necessary in order to assign some probability to step 4. Instead, it can be sufficient to see that there are many ways this could happen, and many ways in which the world we live in naturally incentivises movement in that direction (such as through competition and the search for profit).

From Eliezer Yudkowsky on this subject, in this debate with Dwarkesh Patel:

Maybe it’s just hard to predict, right? And in fact it's easier to predict the end state than the strange complicated winding paths that lead there. Much like if you play against AlphaGo and predict it’s going to be in the class of winning board states, but not exactly how it’s going to beat you.

It's also reasonable, of course, to assign a relatively low probability to these outcomes while still remaining concerned about them.

Statement on AI Risk

It's not difficult to imagine two additional steps in how this plays out:

    5. At some point humans are no longer useful to the machines, and merely consume space and resources.

    6. Extinction of humanity.

This all somehow got a lot more real for a lot of people following the release of ChatGPT.

Here are some sound bites about this, from the Guardian.

And to get an idea of the breadth of concern, the following single-sentence statement from the Center for AI Safety has now been signed by hundreds of AI scientists and other notable figures:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Weighing Risks Against Benefits

Taking a step back, there are also a lot of good reasons to be in favour of progress, in a general sense.

Some great material on the subject of progress in general, and arguments in favour of the Progress movement, can be found on the Roots of Progress blog.

AI development is progress, and as AI becomes more capable we can hope for a lot of significant benefits, perhaps even right up to the point when things might be expected to go wrong.

In this interview with the New York Times, Demis Hassabis states:

Obviously, I worked on A.I. my whole career because I think it’s going to be the most beneficial technology for humanity ever, cure all diseases, and help us with energy, and also sustainability, all sorts of things I think that A.I. can be an incredibly useful tool for.

If we weigh these kinds of potential upsides against the potential for the extinction of the human race, it's clear the stakes are high!

A Key Question

There's quite an intense debate, then, about whether we should pause, continue, or even accelerate AI development. See, for example, George Hotz vs Eliezer Yudkowsky.

The debate is nuanced, and this 'Crux List' from Zvi Mowshowitz captures some of the potential complexity here, but I think there is one key question, central to this debate, that we can focus on here.

As it continues to improve, at what point exactly does AI start to be a real danger?

(For some people this is the crux but I think that other cruxes can kind of fold in to this question to some extent. For example, if your position is that there is no risk because we could easily shut down AI, this position will tend to get weaker if you imagine increasingly capable and autonomous AI.)

Alignment

Some would suggest that we can avoid catastrophe by aligning AI with human values. While this isn't what I want to write about here, I should at least nod in this direction, since it seems to be, at least implicitly, the position of a lot of people working in the industry.

I can see many problems with this idea, unfortunately. To summarise my concerns very briefly:

  • What evidence is there to suggest that this is even possible, from a technical point of view?
  • What kind of values would we, or could we, impart to actually change the end outcome?
  • What kind of 'genie in a bottle' end state could we imagine here, in order for this to work (and how robust could this ever really be)?

If you are asking why an AI would want to take control, the answer is instrumental convergence, and the same drive is also going to be central to AI giving us what we want and enabling us to make a profit.

Some more in-depth arguments on alignment have been made elsewhere.

But for the purposes of this essay let's just agree that AI Alignment is at the very least difficult to achieve, and far from a sure thing, and that this then leaves us with essentially the same fundamental concerns.

Thresholds

So I'm arguing that some level of artificial intelligence should be considered a significant risk to humanity, and that it's important to be right about what that level is.

This gets more concrete if you want to implement some kind of regulation to oversee or limit AI development, at which point you need to define actual thresholds beyond which your regulation applies.

The US government Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, for example, requires compliance with reporting requirements for:

(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations, and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.

Other examples of concrete thresholds can be found in the US government commissioned report from Gladstone AI (analysis of this by Zvi Mowshowitz here), and in this draft bill recently proposed by the Center for AI Policy.

Inevitably, as we can see in this paper from the UK Department for Science, Innovation & Technology, these thresholds are determined from capability assessments of current frontier models, extrapolated into the future using trends in compute, data, and algorithms, together with scaling laws.

Politically, too, 'talking price on the thresholds' (as Zvi calls it) is going to be very much influenced by the compute capabilities already deployed by frontier AI developers. (It's always going to be easier, politically, to regulate to restrict future capabilities than to demand that existing compute capacity be torn down!)

(For reference, the total training compute for GPT-4 appears to have been something on the order of 2 x 10^25 FLOP.)
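
To get a feel for where current systems sit relative to these thresholds, here is a minimal sketch in Python that compares an estimated training-compute budget against the Executive Order's 10^26 operation reporting threshold, using the common rule of thumb that training compute is roughly 6 x parameters x tokens. The parameter and token counts below are illustrative assumptions, not figures for any particular model.

    # Minimal sketch: compare an estimated training-compute budget against the
    # Executive Order reporting threshold of 1e26 operations.
    # The parameter and token counts used below are illustrative assumptions only.

    EO_REPORTING_THRESHOLD_FLOP = 1e26

    def estimated_training_flop(n_params: float, n_tokens: float) -> float:
        """Common heuristic: training compute ~= 6 * parameters * tokens."""
        return 6 * n_params * n_tokens

    # Hypothetical frontier-scale run: 1e12 parameters trained on 2e13 tokens.
    flop = estimated_training_flop(n_params=1e12, n_tokens=2e13)

    print(f"Estimated training compute: {flop:.2e} FLOP")  # ~1.2e26
    print("Above EO reporting threshold:", flop > EO_REPORTING_THRESHOLD_FLOP)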

Agency Overhang

In this article, Jeffrey Ladish raised the following concerns about a potential 'agency overhang':

As language models become increasingly capable of performing cognitive tasks that humans can solve quickly, it appears that a "cognitive capability : agency overhang" is emerging. We have powerful systems that currently have little ability to carry out complex, multi-step plans, but at some point, these powerful yet not-very-agentic systems may develop sophisticated planning and execution abilities. Since "fast cognition capability : agency overhang" is unwieldy, I will shorten this to “agency overhang”.

By agency, I mean the ability to generate and carry out complex plans to do specific things, like run a software company, or run a scientific research program investigating cancer treatments. I think of a system as “more agentic” when it can carry out more complex plans that take more steps to accomplish.

It’s hard to estimate how quickly planning and execution abilities could be developed from state of the art (SOTA) language models, but there is some risk these abilities could develop quickly given the right training environment or programmatic scaffolding (e.g. something like AutoGPT). This could look like a sharp left turn that happens very suddenly during training, or it could look like a smoother-but-still-fast development taking weeks or months. My claim is that any of these relatively fast transitions from “systems with superhuman cognitive abilities on short time horizon tasks but poor planning and execution ability” to “systems that have these abilities plus impressive planning and execution ability” would be very dangerous. Not only because rapid gains in cognitive capabilities are generally risky, but because people might underestimate how quickly models could gain the dangerous planning and execution abilities.

It can be dangerous to extrapolate future risk from the capabilities of existing AI, and to define thresholds purely in terms of raw compute capabilities. Instead we need to carefully consider the implications of different kinds of capabilities.

Capabilities of a Dictator

So I want to ask: what kind of people take control, and what kind of capabilities are necessary for this?

Looking through the entries in the Dictator Book Club series we can see a variety of different profiles.

Chavez and Modi can perhaps best be characterised as showmen. From the Book Club:

He could keep everyone’s attention on him all the time (the emergency broadcast system didn’t hurt). And once their attention was on him, he could delight them, enrage them, or at least keep them engaged. And he never stopped.

From Wikipedia regarding Modi:

Modi completed his higher secondary education in Vadnagar in 1967; his teachers described him as an average student and a keen, gifted debater with an interest in theatre. He preferred playing larger-than-life characters in theatrical productions, which has influenced his political image.

Orban and Putin studied law, which I can intuitively imagine being useful for a dictator, but Xi was an engineer. Scott Alexander suggests, however, that the importance of actual engineering skills and capabilities, at least, should be played down:

Here is an article telling us not to take China’s engineer-kings too seriously. It argues that (aside from Deng’s original picks), most of them never did much engineering, and just studied the subject in school as a generic prestigious-sounding degree to springboard their government career. Chinese engineering curricula are easy, and powerful people frequently cheat or pay others to write their dissertations.

and

Aside from a few of Deng’s personal picks, we should think of this less as “China is a magic place where rational scientists hold power”, and more as “for idiosyncratic reasons, social climbers in China got engineering degrees.” Certainly none of these people were selected for the Politburo on the basis of their engineering acumen. They got their power by bribing, flattering, and backstabbing people, just like everyone else.

This article about Putin's dissertation also questions the extent to which he actually studied law:

Now, is all of this important in the final analysis? Given everything else people have written about Mr. Putin, this might seem minor. And indeed, some would say, it’s very clear he never wrote the thing in the first case. This is clearly the product of some diploma-mill type operation of which there are so many in Russia. This is a dissertation paid for, made to order. If he didn’t write it, who cares whether or not it’s plagiarism

And from the Dictator Book Club regarding his earlier schooling:

Putin was a mediocre student; schoolmates who remember him at all recall that he was easily-offended, often got in physical fights, and always won.

I guess the point is that academic ability is not something that seems to be necessary in order to take control, nor should we expect it to be.

Some common traits coming out of the Dictator Book Club series are determination, single-mindedness, and a willingness to play power games and to exploit possibilities and loopholes in the system when they present themselves.

Regarding Orban:

To call it "a competitive streak" would be an understatement. He loved fighting. The dirtier, the better. He had been kicked out of school after school for violent behavior as a child. As a teen, he'd gone into football, and despite having little natural talent he'd worked his way up to the semi-professional leagues through sheer practice and determination. During his mandatory military service, he'd beaten up one of his commanding officers. Throughout his life, people would keep underestimating how long, how dirty, and how intensely he was willing to fight for something he wanted. In the proverb "never mud-wrestle a pig, you'll both get dirty but the pig will like it", the pig is Viktor Orban.

Regarding Modi:

If I was the person that cleans the car, I made sure to clean the car very nicely, so that even my boss thought: "That is a good boy, teach him to drive, he will be useful for our driving." Then I become a driver. So basically, whichever assignment is given to me, at that point of time, I am totally involved in it. I never think about my past, I never think about my future.

From the article about Chavez:

All dictators get their start by discovering some loophole in the democratic process. Xi realized that control of corruption investigations let him imprison anyone he wanted. Erdogan realized that EU accession talks provided the perfect cover to retool Turkish institutions in his own image. Hugo Chavez realized that there’s no technical limit on how often you can invoke the emergency broadcast system.

About Orban exploiting loopholes:

There was a rule that the Hungarian constitution could not be amended by less than a four-fifths majority. Unfortunately, that rule itself could be amended by a two-thirds majority. Orban used his two-thirds majority to trash the rule, then amend the constitution with whatever he wanted.

The ability to network seems important, for Orban:

Those thirty-six college friends must have seen something in him. They gave him his loyalty, and he gave them their marching orders. The predicted Soviet collapse arrived faster than anybody expected, and after some really fast networking ("did you know I represent the youth, who are the future of this country?") Orban got invited to give a speech at a big ceremony marking the successful revolution, and he knocked it out of the park.

Discretion and the ability to disguise your threat seems to have been important for Putin:

Possibly the most bizarre fact about Putin’s ascent to power is that the people who lifted him to the throne know little more about him than you do. Berezovsky told me he never considered Putin a friend and never found him interesting as a person . . . but when he considered Putin as a successor to Yeltsin, he seemed to assume that the very qualities that had kept them at arm’s length would make Putin an ideal candidate. Putin, being apparently devoid of personality and personal interest, would be both malleable and disciplined.

Similarly, for Erdogan:

Erdogan had been traumatized by the 1997 military coup. He knew that if he let a hint of his true colors show, the military would depose him in an instant. So he waited, continuing to pretend to be a boring center-right liberal. "What would a real center-right liberal do?" he asked himself, and figured he might as well deregulate some stuff and pass capitalist policies. The Turkish economy entered an unprecedented boom, and a (presumably baffled) Erdogan was hailed as a genius.

Intuitions About Requirements for Political Success

Even without concrete examples, I think we have fairly good intuitions about the abilities or personality traits necessary to succeed in politics.

Consider a person who has successfully mastered many scientific and cultural areas of academic learning, a polymath. My intuition is that such a person is relatively unlikely to be successful, if they choose to enter into some political endeavour.

Examining the reasoning behind this intuition, I might infer that this person is not fundamentally interested in or dedicated to politics (or they wouldn't have bothered to spend the time learning so many things not directly relevant to political goals), or that this person is naive and incapable of political manoeuvres such as horse-trading and backstabbing (otherwise they would have had other ways to earn money and achieve status without having to work so hard).

But I would also expect that learning many things unrelated to or unnecessary for politics consumed some kind of learning resource or capacity, taking that capacity away from things that are more relevant or necessary (understanding human motivations, playing games around social status, networking, etc.).

Expect Leaders to Delegate

We also have some good intuitions relating to what kind of abilities and personality are important in order to lead (or take control of) a company, I think.

Aside from the political manoeuvring and power games that might be necessary to get to the top (and to stay there), we can consider what makes a CEO effective in this position.

We don't expect a CEO to concern themselves with the details and minutiae of the company's day-to-day work, and certainly not to have any kind of broad encyclopaedic knowledge, or wide-ranging skills relating to that work. Instead, CEOs are expected to delegate extensively and to occupy themselves only with the most important strategic actions and decisions.

Current LLMs are Polymaths, not Political Agents

Coming back to AI, then, how should we think about current state of the art large language models (LLMs)?

Well, whatever you think of the general level of intelligence and reasoning of LLMs, it's hard not to be impressed by the breadth of knowledge and range of capabilities that can be elicited from these models when used appropriately.

We can make this more concrete by looking at diverse benchmarks across many fields and the variety of exams passed by LLMs.

According to OpenAI, GPT-4 aced the bar exam, the reading and writing sections of the US SAT, and the USA Biology Olympiad Semifinal Exam, and passed Advanced Placement exams in subjects as diverse as Art History, Calculus, Chemistry, Macroeconomics, Microeconomics, Physics, Statistics, US History and World History.

The same model can also write code, up to a point, in many different programming languages, and translate between many different human languages (see here, and here).

If you spend some time with current LLMs, however, it doesn't take long to find weaknesses that balance these strengths, and these models are clearly not effective political agents.

With regard to effectiveness in game situations, from Large Language Models and Games: A Survey and Roadmap:

Mainly, LLMs suffer from hallucinations, meaning that they will output plausible but false statements simply because they are a probable sequence of words. Hallucinations are inevitable given how the world is described to the machine; LLMs lack grounding, so the text they generate is detached from constraints of reality. Yet LLMs always “act” confidently in their responses, even when wholly mistaken. Beyond hallucinations, LLMs suffer from factual errors, outputting responses that are wrong even though the LLM has access to information that proves otherwise.

also

current LLMs struggle to capture user intent during conversation—let alone more ill-defined concepts such as players’ emotion or engagement

and

On a larger scale, LLMs suffer from losing context and struggling with continuity. This is because the “memory” of an LLM is constrained by its context size, which limits the extent of its inputs and outputs, as well as its response time due to the attention mechanism.

On balance, I suggest that a general characterisation of 'naive academic polymath', as opposed to 'strategically astute political agent', currently seems to fit these things pretty well.

Power Play Overhang

So where am I going with all this?

The first point is really just to reformulate the agency overhang problem a bit, drawing on those intuitions about humans taking control.

So, more specifically, we should be concerned about the possibility of some future AI that is capable of operating as an effective political entity. Sources such as the Dictator Book Club series can then inform our intuitions about this, with the relevant abilities being things like charisma, single-mindedness, the ability to disguise threat, and the ability to make effective plays in political games.

What is Not Required

OK, but the most important point, perhaps, relates to capabilities that are not required by an entity that wants to take control.

Characterising this very roughly, I suggest that such an entity does not require very broad polymath-like or polyglot-like detailed knowledge or understanding about exactly how the world works.

A human entity would naturally delegate this understanding to others, and nothing prevents an AI entity from doing the same. (In the case of AI, this delegation could be to humans, and/or other AI entities.)
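
As a toy illustration of this delegation pattern, here is a minimal sketch in Python of a controller that holds only a high-level plan and routes the detailed work to specialist workers. The worker names and tasks are entirely hypothetical, and the same structure applies whether those workers are humans or other AI systems.

    # Toy sketch of delegation: the controller holds only a high-level plan and
    # routes detailed work to specialist workers (humans or other AI systems).
    # All names and tasks here are hypothetical, for illustration only.

    from typing import Callable, Dict, List, Tuple

    # Registry of specialist workers, keyed by the kind of task they handle.
    WORKERS: Dict[str, Callable[[str], str]] = {
        "research": lambda task: f"[research summary for: {task}]",
        "coding": lambda task: f"[code written for: {task}]",
        "outreach": lambda task: f"[messages drafted for: {task}]",
    }

    def controller(plan: List[Tuple[str, str]]) -> List[str]:
        """Delegate each subtask; the controller itself needs no detailed
        knowledge of how any subtask is actually carried out."""
        return [WORKERS[kind](subtask) for kind, subtask in plan]

    print(controller([
        ("research", "survey the market"),
        ("coding", "build a prototype"),
        ("outreach", "contact early users"),
    ]))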

Movement Towards a Different Kind of AI

For the current state of the art, ingesting more and more data, and learning everything in that data, seems to be necessary for improving the general capabilities of large language models, and so, to some extent, these models are necessarily polymaths (and polyglots).

This can change, however.

There's a lot of work already going into developing more specialised models, and AI that is more capable of agent-like behaviour. Unfortunately, it looks like AI that's more powerful in this way is also going to be more economically valuable.

From this interview with Demis Hassabis, for example:

[Will Knight] Does that mean that the competition between AI companies going forward will increasingly be around tool use and agents—AI that does things rather than just chats? OpenAI is reportedly working on this.

[Demis Hassabis] Probably. We’ve been on that track for a long time; that’s our bread and butter really, agents, reinforcement learning, and planning, since the AlphaGo days. We’re dusting off a lot of ideas, thinking of some kind of combination of AlphaGo capabilities built on top of these large models.

And from Mark Zuckerberg here:

I think that there's going to be a kind of Meta AI general assistant product. I think that that will shift from something that feels more like a chatbot, where you ask a question and it formulates an answer, to things where you're giving it more complicated tasks and then it goes away and does them.

Game Playing AI

Machines already surpass humans in many game playing domains, of course.

Chess was once considered a bastion of human intellect, but fell to game playing AI in 1997, and technological progress since then feels like an inexorable juggernaut, crushing humans in increasingly less restricted and more generalised game playing domains.

This includes Go in 2016, Poker in 2017, and StarCraft 2 and Dota 2 in 2019.

Game playing AI already embodies many of the capabilities required for political 'power play' that are missing from LLMs, such as the ability to understand context reliably and to make and execute plans effectively and single-mindedly, but only in fairly restricted domains. The danger is then something that can do these things with enough understanding of the real world to function effectively in real-world political games.

A Different Kind of Beast

So now we come to the central point of this essay.

From the post about agency overhang, once again:

My claim is that any of these relatively fast transitions from “systems with superhuman cognitive abilities on short time horizon tasks but poor planning and execution ability” to “systems that have these abilities plus impressive planning and execution ability” would be very dangerous.

I think that there is a subtle but important mistake here, and this mistake is something that's also being made, implicitly, by a lot of people in attempting to predict the future of AI.

The mistake is to be concerned about current AI plus some additional capabilities, as opposed to a different set of capabilities. What we should be concerned about is a different kind of beast.

Hypothetical Minimum Viable Product

Historically, progress in AI has involved a sequence of surprises (interspersed, admittedly, with the odd AI winter here and there).

The success of the transformer architecture behind ChatGPT was itself a surprise that took AI developers in a different direction, and we should be open to possibilities for different architectures and approaches in the future.

I suggest that the beast to worry about is some hypothetical future system with a new architecture (or with important changes to an existing architecture), and that this should then be thought of as a hypothetical minimum viable product for the set of capabilities actually required for a dangerous accretion of power and control.

It could be useful to think about what needs to be added to current game playing AI to get to this minimum viable product. (It's clear that our hypothetical future system will require more general intelligence than current game playing AI, but how much more?)

It's interesting to consider a minimum viable product for dangerous fully autonomous AI, but we can also take some lessons from Systems Theory and consider systems that fall short of full autonomy but nevertheless integrate with human, corporate, and political entities into something that is then dangerous as a whole.

(According to this paper requested by the European Commission, "President Putin has said that whoever leads AI will rule the world.")

Ball-Park Risk Estimation

It's hard to predict the future.

I'm really just nodding at some possibilities here (as well as pointing out that this is hard), and for a lot of people this is going to come down to some kind of ball-park estimation of risk.

And then the way we frame these estimates is important, and will significantly affect the results (whether actual numbers or just feelings).

Framing this (consciously or not) as risk from "current AI plus additional capabilities" is a mistake.

Instead of imagining the effects of scaling up from 10^25 FLOP of training compute (or 175 billion network parameters) by another couple of orders of magnitude, while still extracting all the information required for a polymath, we need to imagine what happens if we scale these things up but also spend the learning resources in a much more targeted way.

Conclusion

OK, so to wrap up:

Let's learn some lessons from history.

Watch out for traps that prevent you from thinking rationally and reacting appropriately as potentially disastrous outcomes play out around you. Don't assume that a normal state of affairs must necessarily continue into the future. Instead, take time to engage properly with key questions that can all too easily end up consigned to some science fiction 'far mode'.

If you haven't already considered the question closely, try to work out what you think the continued development of machine intelligence means, and how you expect it to play out in the real world.

And then, if you're concerned about AI, be careful to calibrate your expectations to the appropriate kind of beast, which isn't the LLM chatbot you can interact with today, no matter how impressive that may seem.

Final Thoughts from Claude.ai

In response to this article, and the question: "Are there accepted means of overcoming normalcy bias which might be useful to stimulate (political) action?"

Claude suggests:

To overcome normalcy bias and stimulate political action, consider the following strategies:

  • Educate and raise awareness using historical examples and data.
  • Utilize compelling stories and narratives to illustrate potential impacts.
  • Encourage critical thinking and questioning of assumptions and biases.
  • Emphasize the benefits of taking action and the positive outcomes.
  • Create a sense of urgency by highlighting the time-sensitive nature of the situation.
  • Leverage social proof by showcasing actions taken by others.
  • Engage influential leaders, organizations, and media outlets to raise awareness and mobilize action.

By employing these strategies, it may be possible to overcome normalcy bias and stimulate the necessary political action to address potential risks associated with advanced AI development.