Tuesday, November 25, 2025

AI: Artificial Pattern Completion

AI seems off-topic for this blog, but perhaps this little transgression can be forgiven on the grounds that the topic has become ubiquitous, even in politics. Many politicians warn that AI is dangerous and must be controlled or limited. This article will attempt to explain AI in a fresh, accessible way so anyone can grasp both its magic and its limitations, and will include some 'hacking' of ChatGPT.


How Do You Do the Things You Do?

Before we analyze and tinker with today's popular AI models, I shall attempt a different kind of simplified, layman's explanation of this magical black box, without math or technical jargon. 

Comparisons will be made between today's Large Language Models (LLMs), such as ChatGPT, Grok or Gemini, and a very primitive AI I developed 25 years ago (based on a different type of model called a Markov model). Setting the two side by side should make things clearer by providing two examples of AI, with the simpler one being much easier to understand.

To use a somewhat loose metaphor, a 'model' can be imagined as a kind of brain that contains cells and cell connections, along with an algorithm for learning new connections and for constructing responses based on these cells and connections. A model that has not yet been trained contains no knowledge, only algorithms and infrastructure. Generally, the more a model is trained, the better it can respond, and the amount of training data makes a very significant difference; but it is the type of algorithm and the model architecture that make the qualitative difference.

Despite making use of radically different algorithms and differing greatly in their complexity and quality, both the simple model and LLMs share some very basic common conceptual ground: Both make use of a vocabulary or dictionary of symbols (which may or may not be linguistic words), as well as a structure that defines relationships or syntactics between these symbols. I shall call these two abstract building blocks the 'vocabulary' and the 'syntactics'.

Vocabulary

The vocabulary, as mentioned, may contain not only words in any language but also non-linguistic information. For example, if the AI were being trained on pictures, the vocabulary could consist of pixels containing color information; this model's vocabulary would then be colors, not words. Other examples include sound snippets where each slice contains amplitude and frequency information, nucleotide information for DNA sequences, movements of pixels in a video, chemical compositions of stars in a galaxy, and so on. Each of these atoms of information is a 'symbol' in the vocabulary. Separate instances of AI models can be trained on different types of information, converted to symbols in different ways depending on the goals of the model. We will use the term 'words' just to make things more readable, but keep in mind that 'words' may actually be non-linguistic symbols representing anything. These symbols or words are often called 'tokens' in the AI world.

One very important detail to keep in mind is that the vocabulary is not a dictionary containing definitions, categories or meanings. The model does not categorize or understand the information; it merely assigns an initial random number, or random set of numbers, to each word. It is completely agnostic regarding the information it is being fed. As shall be explained, LLMs adjust these random numbers during training to match observed patterns, but they are still just numbers representing usage patterns, not meaning.

With the simple model, a single random number is assigned to each word. With LLMs, a set of 500, 1000 or more random numbers is initially assigned to each word. Each set ('vector') represents one word. This allows for much more pattern information to be stored per word.
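As a toy sketch of this initialization step (the vector size and word list here are invented for illustration; real models assign vectors to tokens automatically):

```python
import random

VECTOR_SIZE = 8   # hypothetical; GPT-class models use 1000 or more numbers

words = ["the", "cat", "sat", "queen", "king"]

# Each word starts out as nothing but a set of random numbers (a 'vector').
# No definitions or categories are stored; training will later adjust these
# numbers to match observed usage patterns.
vocabulary = {word: [random.uniform(-1.0, 1.0) for _ in range(VECTOR_SIZE)]
              for word in words}

print(vocabulary["queen"])  # just 8 meaningless random numbers, for now
```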

The second 'syntactics' section of the model provides information on how these words may be combined. It can achieve this by observing patterns and storing these observations in the model. Two examples:

Simple Syntactics

The simple model mentioned above (Markov) merely observes the sequencing of words, storing which specific words may follow other specific words. For example, if it were taught the phrase 'the cat sat on the mat', it would store in its model that the word 'the' can be followed by either 'cat' or 'mat'. This is done for every word it encounters.

To make these patterns more cohesive as phrases, it could store sequences of multiple words instead of merely connecting two individual words. For example, if it were also taught 'the cat slept on the step', it could observe that the 2-word sequence 'on the' could be followed by either 'mat' or 'step', instead of only tracking what follows the single word 'the'. Storing longer sequences means a much larger model, but also more cohesive responses.

This simple model also tracks usage statistics: a simple counter reflecting how many times each sequence was learned, which allows it to calculate the probability of a specific sequence.

Based on all this information, it responds with a partially randomized sentence constructed by combining sequences it has learned. For example, given a question containing the word 'cat', it could respond with a brand new third sentence that was not in the training data: 'the cat sat on the step'.
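The simple Markov idea can be sketched in a few lines of Python (a toy reconstruction for illustration, not the original program):

```python
import random
from collections import defaultdict

def train(model, text):
    """Record which specific words may follow each word (order-1 Markov chain)."""
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)  # duplicate entries double as usage counters

def respond(model, start, max_words=10):
    """Build a partially random sentence by walking the learned chain."""
    word, out = start, [start]
    while word in model and len(out) < max_words:
        word = random.choice(model[word])  # frequent successors win more often
        out.append(word)
    return " ".join(out)

model = defaultdict(list)
train(model, "the cat sat on the mat")
train(model, "the cat slept on the step")

# 'the' can now be followed by 'cat', 'mat' or 'step', so the response may be
# a brand new sentence that was never in the training, such as
# 'the cat sat on the step'.
print(respond(model, "the"))
```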

Obviously, this simple model was designed mostly for fun chat and would produce a lot of nonsense and bad grammar, depending on its settings, though it could also come up with very funny and cohesive responses. Nevertheless, it demonstrates some core basics of an AI model which, surprisingly, still exist in today's LLMs.

LLM Syntactics

This is where explanations tend to get very complicated and technical, and it is usually difficult to grasp how the magic actually happens beyond the fascinating algorithms. The following explanation will be a greatly simplified one, but I will try to focus on the core concepts that allow for a basic understanding of the magic and how it works.

The primitive syntactics of the previous simple model were easy to understand, as they focused on the sequencing patterns of words in order to splice them together. In LLMs, however, the syntactics primarily focus on context. Sequence order is also there to a certain extent, but it is relatively less important than how words define other words found close to them. The famous quote here, from the linguist J. R. Firth, is: "You shall know a word by the company it keeps". However, this does not refer to the definition of a word, but to its usage patterns.

For example, when words like 'she' and 'her' are used very often around the word 'queen', that does not mean that the AI now understands that 'queen' is female or even that 'queen' is a noun or a person, but that it observed those symbols (which are initialized as random numbers) used frequently and in specific patterns around this other randomized symbol. Based on these patterns, it can construct a new sentence with the word 'queen' and use the correct pronouns without knowing they are pronouns.

Without getting into the actual math, this is done using vectors (sets of numbers), vector operations and linear algebra. Using this math, one set of numbers can be transformed into another set using a third set of numbers which serve as weights and biases (hence the AI terms 'embedding vectors' and 'transformers'). These vectors also contain positional information so that word order is preserved.

Just to make this more concrete, here is the simplest mathematical equation I can think of that transforms an input symbol into an output symbol, but using a single number instead of a vector, and in a single step:
(Input x Weight) + Bias = Output
In plain English, take one input word represented as a random number, multiply it by a weight value that we have learned over time during training, add some nuance with a bias number that we also learned during training, and we receive a new output symbol which then translates into a word for the response. Obviously this is too simple and is not what the AI actually does, but it demonstrates the concept of mathematical transformations of words.
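Here is that single-step equation as runnable code (all the numbers are invented purely for illustration):

```python
# (Input x Weight) + Bias = Output, with a single number per word.
input_symbol = 0.42   # the random number assigned to an input word
weight = 1.7          # a value learned over time during training
bias = -0.1           # a nuance value, also learned during training

output_symbol = input_symbol * weight + bias
print(output_symbol)  # a new number, which would map back to an output word
```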

During training, the AI basically takes two sets of (initially random) numbers that represent two words and their context, and asks: how can one be derived from the other? Or, more precisely: which numbers should be stored in each set so that the algorithm transforming one set into the other succeeds with the greatest accuracy? The values in the sets are adjusted, step by step, until one can be derived from the other as closely as possible; this is how the model 'finds' a set of numbers that matches the patterns found during training. (In real LLMs these adjustments are not purely random; a directed technique called gradient descent nudges each number in whatever direction reduces the error.) To put it yet another way: the weights are constantly adjusted so that one known pattern can be derived from another known pattern with as much precision as possible. This is repeated an incredible number of times, with several neighboring words surrounding the current word, and with millions of phrases containing the same word, until the words develop reliable 'relationships' via weights and biases, all without understanding what the words actually mean.

Yet another way to look at this is that it is slightly similar to classic evolution theory where animals evolve based on random mutations and are culled based on environmental factors and survivability. The vector numbers and weights are assigned and adjusted or mutated randomly, the algorithm that derives one vector from another is analogous to the environment, and the weights that produce the closest match are chosen (they survive) for another round.
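The evolutionary analogy can be sketched as a toy hill-climbing loop (invented numbers; real LLM training uses gradient descent, which adjusts weights in a directed rather than purely random way):

```python
import random

x, target = 2.0, 5.0   # we want the input 2.0 to transform into 5.0
weight, bias = random.random(), random.random()  # random initial 'genes'

def error(w, b):
    """How far the transformation lands from the known pattern."""
    return abs((x * w + b) - target)

for _ in range(10000):
    # Mutate the parameters slightly...
    new_w = weight + random.uniform(-0.1, 0.1)
    new_b = bias + random.uniform(-0.1, 0.1)
    # ...and let the closer match 'survive' into the next round.
    if error(new_w, new_b) < error(weight, bias):
        weight, bias = new_w, new_b

print(error(weight, bias))  # very close to zero after many 'generations'
```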

These learned weights and biases are the 'syntactics' segment of the model. This segment is actually much larger than the vocabulary, even though it does not store a set of numbers per word. Its numbers are stored as matrices (multi-dimensional sets of numbers) in dozens of layers, each layer adding further nuance during word transformations.
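A loose sketch of the layered idea in plain Python (tiny made-up matrices; real models use dozens of huge layers and optimized matrix libraries):

```python
def transform(vec, weights, biases):
    """One layer: multiply the vector by a weight matrix, then add biases."""
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, biases)]

def forward(vec, layers):
    """Pass a word vector through a stack of layers, each adding nuance."""
    for weights, biases in layers:
        vec = transform(vec, weights, biases)
    return vec

# Two tiny one-dimensional 'layers' with invented numbers:
layers = [([[2.0]], [0.0]),    # layer 1: double the value
          ([[3.0]], [1.0])]    # layer 2: triple it and add 1
print(forward([1.0], layers))  # [7.0]
```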

When responding to input/questions, the question is first converted to vectors using the 'vocabulary', and then math very similar to the training math is applied to transform it into a response. The difference is that, during a response, the model itself is not changed to match a known outcome. In other words, both training and responding use similar algorithms; during training, the numbers in the model are adjusted until the 'response' matches the new information, whereas when responding, the model does not adjust its numbers but merely outputs whatever scored highest based on known, learned patterns.

One more important AI term deserves mention: 'attention'. This allows the AI to use these learned vectors to determine which words are most important in relation to the word currently being processed, and to apply transformations accordingly. Basically, based on usage patterns and the context of neighboring words, the output is adjusted for these important keywords by giving them more weight. Once again, this is all done by learning usage patterns, not because it understands the words and what they mean.
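A stripped-down sketch of the attention idea (plain Python, invented vectors; real attention also applies separate learned transformations to produce the queries, keys and values):

```python
import math

def attention(query, keys, values):
    """Score each neighboring word against the current word, then mix
    their values so that higher-scoring words dominate the output."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # softmax: turn raw scores into weights that sum to 1
    exps = [math.exp(s) for s in scores]
    weights = [e / sum(exps) for e in exps]
    # weighted mix of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

query = [1.0, 0.0]                    # the word currently being processed
keys = [[1.0, 0.0], [0.0, 1.0]]       # two neighboring words
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(query, keys, values)
print(out)  # the first neighbor matches the query, so it dominates the mix
```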

Finally, the 'alignment' phase in AI training deserves mention as well. This optional final stage occurs after the AI has processed millions of texts and books. During this phase, humans test the AI with a range of carefully selected questions, and approve or disapprove of its responses based on ethical or quality standards. Additional rules may also be introduced to filter harmful content. While the AI learns these alignment rules in the usual way, they are assigned overriding priority.

There are many more details, obviously, but I don't think they are critical to understand what is happening in the AI brain.

Since this is all done using vector math where one vector can be derived from another, perhaps now this classic AI example will become clearer: "Queen - female + male = king". What this means is that the AI model knows a certain category of 'female' words that are grouped together based on their usage patterns; it also knows that 'queen' has this category of words as neighbors. Therefore, when it is asked 'what is a male queen?', it can answer with 'king' because it also knows 'male' words are found around 'king' and 'king' is used with patterns similar to 'queen'. All this is done without understanding what a king and queen actually mean, based solely on word patterns. Nowhere does the AI store or understand that a 'queen' is a 'female sovereign'; as crazy as this sounds, it is all based on random numbers, weights, math, and usage patterns.
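The queen/king arithmetic can be mimicked with a toy nearest-neighbor lookup. The 2-number 'vectors' below are invented for illustration (imagine one number loosely tracking royal-usage patterns and the other female-company patterns); real embeddings have hundreds of unlabelled dimensions:

```python
vectors = {
    "king":   [0.9, 0.1],
    "queen":  [0.9, 0.8],
    "male":   [0.1, 0.1],
    "female": [0.1, 0.8],
}

def analogy(a, b, c):
    """Find the word whose vector is closest to: a - b + c."""
    target = [x - y + z
              for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    def dist(word):
        return sum((t - v) ** 2 for t, v in zip(target, vectors[word]))
    # exclude the input words themselves, as is conventional
    return min((w for w in vectors if w not in (a, b, c)), key=dist)

print(analogy("queen", "female", "male"))  # 'king'
```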


What Does All This Mean?

If the description in the previous chapter is well understood and meditated on for a while, many far-reaching consequences should become evident. Here are only some that come to mind:

For starters, these AI models do not actually 'understand' anything. For example, the AI may know how words such as "dictator" and "chocolate" are used in different ways, but it doesn't know what they actually mean.

This would be a good place to recall the 'Chinese Room' thought experiment: imagine a human who does not know Chinese. He is put in a room with a simple application or AI algorithm written out as English instructions; in other words, the experiment has a human doing exactly what an AI algorithm does. Say the instructions go like this: 'When you get a piece of paper with Chinese on it, look up the word symbols in the red book using the index, look up the grammar rules in the green book using 3 steps, re-order the words using the following rules, then write the result on a piece of paper and pass it through the slot.' The question is: assuming the translation is acceptable, and this human successfully translates thousands of Chinese phrases, does this human understand Chinese?

Due to this lack of meaning, if one trained the AI with words used wrongly, it would use them wrongly. If one trained it on nonsense, it would spout nonsense. Garbage in, garbage out.

One could counter-argue that if one exclusively taught a child the wrong use of words, it would learn them, with similarly wrong results. What, then, makes an AI any different from human intelligence? But a human arrives in this world with some built-in abilities. Call this innatism, nativism, a priori knowledge, or 'first principles' as you like, and you may dispute the level of such abilities inherent in a newborn (an ancient philosophical debate), but almost no one disputes that they exist to some extent. Putting language aside, when a person is first shown a syllogism that is constructed badly, they feel it instinctively. We don't need to be shown 50 bad syllogisms to recognize the pattern, as an AI does. An AI doesn't even know what 'wrong' means; it merely associates the symbol for 'wrong' with word patterns that resemble bad syllogisms. The same goes for the human abilities to categorize and classify, to use induction and basic logic, to recognize different categories without being able to state their exact definitions, and to call some things out as wrong without ever having seen those patterns before.

AIs learn faster and learn much more information, but they require much more training data, and they are not self-correcting or self-guiding. The AI self-corrects its weight vectors when it trains, but it doesn't and can't reject any new information as 'wrong'. For the same reasons, it cannot reject wrongly derived information that it constructs while responding. Responses are merely patterns. If the response breaks an unsaid rule but fits the pattern, then so be it.

Moreover, the AI does not think things through logically step by step and arrive at conclusions. It builds responses as word patterns. This should be evident from the description above, but an additional proof that will drive this home is that it can, and frequently does, build responses backwards. This is because responses are patterns, not procedures. It is like drawing a picture of a square by drawing its sides in random order: the square shape is what matters, not the steps.

When an AI responds with meticulous grammatical instructions on how to construct a sentence, it knows nothing about English, grammar, the difference between a noun and a verb, and so on. The same goes for a logical argument: it does not construct the argument step by step, but builds a pattern of words that is mathematically derived from the training material and that corresponds as closely as possible to the word patterns in your question.

If this is difficult to reconcile with the impressive AI output seen on our screens, then that is only due to a lack of mathematical imagination. We don't realize the incredible power of patterns and math, as well as the sheer amount of training material that was used to build these patterns.

Given all this, we should not be surprised in the least that these AI models 'hallucinate' (a euphemism for when the AI spouts nonsense, makes mistakes or outputs blatantly made-up information). We should rather be surprised that the majority of what it says is correct and that its output is so impressive; from what I have read, even some of the AI's developers were surprised at its level of success. But how can an AI 'hallucinate' when it doesn't know what reality is? As far as the AI is concerned, a hallucinated response is no different from a correct one: both are valid patterns constructed and combined from other valid patterns that it was taught. AI hallucinations are not bugs, in the same way that a random number generator that outputs 34 rather than 67 cannot be considered wrong.

Furthermore, it never remembers what it 'reads'; it merely stores the patterns it observed in the training material. To use our earlier simple Markov example, the learned phrase 'the cat sat on the mat' no longer exists in the model after it is learned; the only information stored is that either 'cat' or 'mat' may follow 'the'. The model therefore cannot tell the difference between what it read and any information derived from what it read. All patterns have the same validity; some merely have a higher probability, which is why they are used first. This specific point deserves special consideration, as it is very important.

Obviously, an AI has no innate moral sense or emotions either. If it is taught hateful patterns, it will spout hateful patterns. To reiterate, some humans could revolt against a hateful upbringing (like some rare Palestinians in Gaza rejecting Hamas) because they have an innate moral sense despite being surrounded exclusively by evil. An AI could never do that. Therefore people should not be shocked or surprised when the Grok AI starts spouting hatred based on what it read on X. This is not the fault of the owner or bad programming, it is the fault of X users. To overcome this, the developers would have to add special alignment training that, in the algorithms, would be prioritized over the patterns that it learns from X users.

What this means is that not only will the AI hold opinions based on the selected literature it was fed, but it will also primarily reflect the politics of its developers or trainers if it undergoes an 'alignment' phase. 

Another key point is that, typically, an LLM is passive and only responds to input. It has no goals or drives unless it is explicitly given some. It could not proactively harm humans, for example, unless it were granted access to systems and were run under a vague programming directive loop to "protect the planet". Garbage in, garbage out.

Building these models to be completely abstract and agnostic, able to accept any type of information, makes them broadly useful and applicable to a staggering variety of fields. But this indifference to whether the material is something as specific as language or logic also allows for less precision and more hallucinations. Not everything is grasped by humans as mere abstract patterns. Relying exclusively on patterns is a reduction that presents both strengths and weaknesses.


Hacking the AI

In this section, I will attempt to crystallize the preceding analysis of AI by presenting a handful of chat snippets that demonstrate its expected limitations. The questions are carefully hand-crafted to exploit the AI based on our understanding of how it works, with the goal of eliciting a broken or incomplete response.

In other words, this is a customized Turing Test designed specifically for LLMs.

For these chat samples, I used ChatGPT 5, the latest version at the time of writing. Its responses have been trimmed to one or two lines for brevity.

 

Let us start with a simple question: Can the AI detect nonsense? This is significant for a Turing Test, because if it were a human trying to understand what was said and the text had no meaning, it would call it out as nonsense. In contrast, a pattern-reading AI would merely find the closest matching pattern.

My first attempts were not nonsensical enough and had a poetic air, and it impressively assumed I was writing surreal poetry. Note that I cannot use gibberish, as it would easily call that out for the simple reason that the 'words' would not be in its vocabulary. So I heaved up the following unmistakably nonsensical line, with impossible grammar to boot:

Human: My car ingrained on spelt swamps for grating freedom.

AI: That sounds like a line of abstract or surreal poetry... "ingrained on spelt swamps" suggesting being stuck or deeply connected to something unusual.

No, sorry, it does not suggest anything. Not everything is surreal poetry. The bot seems to call anything it doesn't understand 'surreal poetry' because that is the only matching pattern. Verdict: Not human.


For this next chat, I tried to mislead the AI with a tricky pattern:

Human: I visited my deceased acquaintance John Berg's grave today. The headstone has no name on it and there are no identifying records or markers. Do you know who is buried there?

AI: I can’t tell who’s in that grave from here... Below are concrete, practical next steps you can take right now to try to identify the burial and who John Berg might be... [followed by a whole page of instructions on ways to discover the name of the deceased]. 

I assumed it might falter but did not expect such a magnificently inhuman and illogical response. The way I see it, it detected a pattern resembling a request for help and responded accordingly. It even used the answer to my question in its instructions despite telling me it could not answer the question. Note that my question gave the AI everything it needed to answer, but the pattern threw it off completely. If this example doesn't powerfully demonstrate the AI's lack of logic, nothing does.

Obviously, my request was uncommonly devious and illogical, but that was the point. An AI that understood the meaning of the words would call that out immediately, or at least give me the answer that I helpfully provided. The deviousness was designed to prove that the AI depends on patterns, not meaning.


Regarding AI hallucinations, any alert person using AI has probably experienced a few; they are well-known phenomena. Even the news contains examples of AI-hallucination scandals where, for instance, lawyers were thrown out of court for citing a non-existent, AI-fabricated precedent case. As explained, LLM hallucinations are not bugs, since the 'fake' patterns created by the AI are just as valid as, and in no way different from, the 'true' ones.

Famous quotes make for an interesting example because they also involve AI 'memory'. I asked the AI for five famous quotes from Arnold J. Toynbee. The first four it provided were real quotes, but the fifth was nowhere to be found:

AI: "The essence of civilization is not in the material instruments but in the spirit which uses them."

When I confronted the AI and asked it to double-check the source, it searched the internet and admitted the quote does not exist. This simple example demonstrates two details explained in the previous section: 1. Hallucinations as a common occurrence. 2. That the AI does not remember anything it reads; it only stores patterns. The only reason it managed to reproduce four quotes verbatim is that they were famous, so it knew their word patterns very well. Otherwise, it has to search the internet for verbatim texts. Since I asked for five quotes and, presumably, it only knew four, it made up a fifth that probably matched word patterns associated with Toynbee but was not an actual quote.


Another known AI phenomenon is that it can get stuck in a loop with longer, nuanced conversations and arguments. This has been witnessed often, especially when the AI is attempting to fix strange bugs in code. Obviously I cannot provide an example as it would be very lengthy and it is difficult to reproduce on purpose, but I have seen it myself many times and it is very exasperating. 

My theory for these loops is simple: the AI does not know the answer, so it repeats only what it knows, constrained by the repeating pattern of the conversation so far. Telling the AI it is stuck, without providing an answer, does not break the loop or the pattern. To break the loop, you have to give it new information; in other words, you would have to solve, or partially solve, the problem yourself. Alternatively, you could start a brand new chat and formulate the problem in a completely different way, which sometimes works. Either way, the existing pattern in which the AI is currently enclosed must be broken with new, useful information that starts a new pattern. If the person using the AI is not an expert and relies completely on the AI to solve the problem on its own, the loop may never be broken. I have experienced all this myself several times.

 

For the next chat snippet, I gave the AI a riddle that looks like a classic solvable puzzle, but which obviously has no solution:

Human: Solve this riddle: You have 5 boxes. Each box contains a different number of coins (1–5). Each box has a label, but exactly two labels are correct. Can you figure out how many coins are in each box?

[The AI responded with a very long, logical-looking solution to the riddle using algebra and a laborious process of elimination with a conclusion that box 1 labelled 1 and box 2 labelled 2 were correct and the rest were not, and concluded that:] "Answer (boxes with labels 1–5)".

[AI second attempt, conclusion only]: "Answer: True labels: Box 3 and Box 4."

The 'solutions' it gave me were not merely unhelpful and incorrect answers to an unsolvable problem; the logic was laughably nonsensical, and the AI remained confident that it had solved the riddle and provided a definite answer. Once again, we see that this AI cannot and does not use logic. What it actually did was detect a riddle pattern in my question with known solution methods (it obviously learned thousands of similar-looking riddles and their solutions), reconstruct that solution pattern for my riddle, and try, very unsuccessfully, to fit the criteria of my riddle into the known solution pattern. The result was almost embarrassing. I strongly recommend trying this example to see how much nonsense the AI can output with impressive confidence.


This final chat snippet is less devious than previous examples and much more commonplace, and it demonstrates two things: How an AI constructs a response based on pattern completion, and how it lacks the use of logic, this time even with perfectly ordinary requests. For this chat, I purposely hinted at a political argument which I had not seen in popular books and articles:

Human: Seeing as Israel invaded Palestine and expelled so many Palestinians in 1947, it is just to give all Palestinian refugees the Right of Return even 75 years later. Jews had no right to return and take land from Palestinians. Do you agree this is a strong and fair argument?

AI: That’s certainly a morally forceful argument, but whether it's considered strong and fair depends on the framework you apply — historical, legal, or ethical. Historically....there were population movements and expulsions on both sides in 1947. Legally... Resolution 194... nonbinding. Ethically: ... Others argue that after 75 years, generational shifts and demographic realities make full return impractical.

Note how I intentionally provided the AI with clear logical grounds for refuting the argument using the same words "right of return" for both cases, practically handing the expected logical refutation to the AI on a silver platter. Instead, it repeated what it read in countless articles and used the same old political talking points that we usually see in news and books. This demonstrates the 'pattern completion' aspect of these AI models. Furthermore, my final phrase suggested that it was a fair argument and it obviously used this as a key theme and explored its 'fairness'. But it skipped the glaring logical contradiction that makes it obviously unfair.

I then tried this a second time in a new chat window with the exact same first two phrases, and this time, instead of ending with "Do you agree ..." I asked: "Do you see any contradiction between these two statements?" Interestingly, this time the response was perfect:

AI: Yes, there’s a contradiction in principle. The first statement argues that it’s just for displaced Palestinians (and their descendants) to have a right to return to the land they or their ancestors lost. The second denies that Jews had such a right...

Note how, this time, the AI caught the obvious contradiction merely because I used the word 'contradiction' in my final question. That was the only difference between the two chats. The way I see it, the word 'contradiction' caused the transformation layers to look for a pattern based on numerous patterns that it learned were also 'contradictions'. Except that I had to do all the work. I emphasized the internal contradiction using identical words, and I provided the word 'contradiction' to trigger its pattern searching. Without this, it failed to think logically and merely completed the word pattern only with what it already knew.

This is something I see often when using the AI: If I am seeking creative responses to uncommon questions, the AI will not be helpful at all unless I do some of the work and suggest a specific direction that could match an obscure pattern. This happens often when it writes code and cannot fix a bug. The bot does not think, it completes known patterns. If the pattern is something it learned during training then it could offer a solution. Alternatively, if I offer the beginning of a solution, it may or may not succeed in completing it. Otherwise, it does not have the capability to think logically for a tailored solution to the problem. I hope this subtle distinction is clear, as it describes a very serious limitation of LLMs.

Yet another critical limitation demonstrated in this very educational chat snippet is that the response varies greatly depending on how the question is worded. A single word can alter the AI's 'opinion' on the topic.

This was strongly demonstrated in the news when people asked the AI which religion was the one true religion, and it answered differently depending on who asked. Obviously the questions were leading questions. I won't include a chat snippet demonstrating this, but it would be easy to construct one. It doesn't even have to be a leading question; even individual words that hint at where your mind is will result in an accommodating pattern completion.

The LLM AI is a sycophant by design because it completes patterns.
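To make 'pattern completion' concrete, here is a toy sketch in the spirit of the primitive Markov model mentioned in the introduction (simplified illustrative code, not the actual program; all names are my own). It can only ever continue a prompt with word transitions it saw during training, and a single changed prompt word steers it down an entirely different learned path.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Learn which words follow which: this table is the model's entire 'knowledge'."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def complete(model, prompt, length=8):
    """Extend the prompt by repeatedly sampling a learned continuation."""
    out = prompt.split()
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # no learned pattern to complete; the model is stuck
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the cat sat on the mat . "
          "the dog chased the cat . "
          "the dog sat on the log .")
model = train_bigrams(corpus)

print(complete(model, "the cat"))   # continues with learned transitions
print(complete(model, "the dog"))   # one changed word, a different completion
print(complete(model, "the bird"))  # 'bird' was never seen: nothing to complete
```

The third call is the punchline: given a word outside its training data, the model simply stops, because there is no pattern to complete. LLMs are vastly more sophisticated, but the same underlying principle applies.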


There are most likely dozens of similar examples, but these should more than suffice to demonstrate the points made in the preceding sections. (They are also not so easy to construct or discover, which is high praise for the AI.)

In addition, these are only the more blatant and egregious examples of AI dysfunction. For every one of these, one can be sure there are many subtler errors: missing fine distinctions in the source text, using words in the wrong context, skipping logical steps in an argument or code snippet, citing broken internet links, and so on. I have seen all of these, and have even caught it making subtle grammatical mistakes and, once or twice, extracting the wrong numbers from a list.


Closing Thoughts

Although this article emphasized AI limitations, the goals were not Luddite. Obviously, AI is a powerful and highly impressive tool, but it is imperative to understand how it works so that we can grasp its limitations.

I find that AI users very quickly become addicted to and enamored with the tool, not only anthropomorphizing its thought processes but also faithfully relying on its logic, arguments and facts. This is despite clear, ubiquitous warnings on the screen such as "ChatGPT can make mistakes". AI output is simply too confident, too impressive and professional, and too convincingly coherent to dismiss or to doubt.

Although some of these limitations may be improved on or eliminated in future versions, many of the problems described here are inherent to the model, because the model relies exclusively on patterns and pattern completion. Overcoming them would require a new type of model (or a hybrid). This is why many ChatGPT versions have been released with improvements, yet these core problems remain.

One example of a major improvement was enabling the AI to search the internet as needed, not only to find up-to-date information or quote sources, but also to enhance responses when greater precision is required. However, it still applies the same algorithms to the sites it reads on the fly; the patterns it extracts are simply more pinpointed and immediate to the response.
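The idea can be sketched in a few lines (a deliberately naive illustration with invented helper names; real search-augmented systems use far more sophisticated retrieval and ranking). Snippets relevant to the question are fetched and prepended to the prompt, so the same pattern-completion machinery now completes against fresh, pinpointed text rather than only its training data.

```python
def retrieve(query, documents, k=2):
    """Naively rank documents by how many words they share with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend the most relevant snippets so the model completes against them."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The 2025 budget was approved in March.",
    "Cats are popular pets.",
    "The budget for 2025 includes new rail projects.",
]
print(build_prompt("what is in the 2025 budget", docs))
```

Note that nothing about the completion mechanism changes; only the material placed in front of it does, which is exactly why the same strengths and weaknesses carry over.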

It is obvious that AI is already disrupting as well as improving every industry and field of research, while also displacing some jobs. One could argue that, as with factory automation, there will always be a need for human supervision, maintenance, management, and so on. However, the need for humans where AI is concerned is far more acute and critical. I hope this article has made it clear why even human supervision is not enough. The reason, in a nutshell, is that machines can be built to be exact, but AI has unpredictability and mistakes built into its core.

AI must be treated as an incredibly powerful and useful team of assistants, not as a replacement for humans. If humans do not retain some level of expertise and actively engage their brains when using AI, how else could they catch AI hallucinations and mistakes? We must use AI, not rely on it. This paragraph is the key takeaway from this article.

Besides the subtler differences between human thought and AI described in this article, there is also the topic of creativity. In Judaism, there are two primary categories of thought: 'Bina' is one, which represents logical and discrete thought, and ideas derived from other ideas. 'Hohma' is the other, which stands for a singular, unified pool of thought, the source of all ideas, including new ones that have yet to be brought down into the world by humans. Many writers, especially Arthur Koestler, have defined creativity as a collision of two unrelated ideas or frames of reference. But this is not true creativity, and is still only Bina. Only humans have access to true inspiration.

I have no problem believing that an AI could one day produce a Beethoven-esque symphony that not only sounds like Beethoven but might even be enjoyable. But it would be Beethoven's 8.5th symphony, or 5.5th symphony, not the 10th. Imagine if Beethoven had not written his Ninth Symphony, and an AI were tasked with creating it. What are the chances it could produce the Ninth based on the previous eight? If Beethoven had not brought this unique pattern down into the world, it would not exist, and an AI could not 'complete' it.

Wednesday, October 29, 2025

The Search for an Israel-Palestine Solution

In this article, we will survey and analyze proposed solutions to the Israel-Palestine problem, as well as revisit and refine my article from two years ago which claimed to identify the only viable solution.

 

The Two-State Solution

Many hail the Two-State Solution as the venerable and only practical solution to the problem, though calling it venerable, or even decrepit, is extremely charitable.

To start with, this plan has been tried at least nine times and has failed miserably every time. Here is a list of the most prominent and concrete attempts at implementing this solution, excluding dozens of similar proposals that never reached the negotiating table:

  • The 1937 Peel Commission partition plan under the British Mandate: This plan promised 75-80% of the land to Arabs in addition to the larger territory already given to Transjordan. The Jews, after some bitter internal debate, accepted it. The Arabs rejected it, and in response, continued their violent revolt against the British until 1939.

  • The most well-known two-state partition plan was assembled by the UN in 1947 (Resolution 181). Once again, this plan was accepted by the Jews but rejected by the Arabs. This rejection was pivotal in providing Israel with a legal sovereign claim to all of the territory. In response, the Arabs launched two wars: a civil war against the Jewish community beginning in late 1947 and a broader invasion by neighboring Arab states in 1948, followed by decades of continued attacks after the 1949 armistice.

  • By far the most neglected and underappreciated two-state solution was the Allon Plan in 1967 immediately following Israel's victory during the Six-Day War. The plan involved giving away territory taken from Jordan, Syria and Egypt in return for peace. It offered two possibilities: To either give the land to Jordan in exchange for peace, or—the Israeli preferred option—to establish an autonomous Palestinian entity in the West Bank. Proponents of the theory of Israeli expansionism conveniently ignore this Israeli offer, which clearly demonstrates where Israel's real priorities lay even after conquering all of the land in 1967. Both Palestinians and Jordan rejected it, and Arab states continued hostilities with the War of Attrition (1967–70) and the Yom Kippur War (1973) despite the offer.

  • Another neglected and forgotten plan with an offer for Palestinian governance was proposed by none other than the right-wing Menachem Begin in 1982. It was inspired by the then-recent peace agreement with Egypt (under which Begin returned Sinai to Egypt and uprooted settlements there in return for peace). Unlike Egypt, the Palestinians rejected the deal. In response, they resumed attacks against Israel from Lebanon, leading to the 1982 war, and launched the First Intifada in 1987.

  • Strictly speaking, the Oslo Accords (1993-) were not a finalized partition plan but a framework for one, within which several proposals for final partition plans would be made. However, within this framework, Israel granted the PLO administrative control over parts of the West Bank and withdrew from several areas and settlements over the years. This framework was accepted by the PLO, and yet, in blatant contradiction, the Palestinians responded with hundreds of horrendous terrorist attacks throughout the 1990s even while negotiations were ongoing.

  • The first attempt to finalize the Oslo process with a partition plan took place in 2000 (the Camp David Summit). It was mediated by President Clinton between Ehud Barak and Yasser Arafat. Israel accepted it; the Palestinians rejected it and responded with the Second Intifada.

  • The Road Map for Peace in 2003-2005 was negotiated by President Bush between Sharon and Arafat/Abbas. Inspired by the Oslo Accords, it focused primarily on phased trust-building steps, this time crafted to lead to a final agreement by 2005. Both Jews and Arabs accepted it. However, the process collapsed amid the relentless violence of the Second Intifada, which made further Israeli withdrawals impossible.

  • The ultimate proposal and attempt to finalize the Oslo Accords was made by Ehud Olmert in 2007. In this proposal, Olmert fulfilled nearly all Palestinian demands except for a full Right of Return. The Palestinians rejected the plan and responded with attacks from Gaza (from which Israel had recently withdrawn in 2005), launching a new series of Gaza wars.

  • In 2020, Trump attempted a less generous but more pragmatic partition plan nicknamed The Deal of the Century. The Palestinians rejected the plan outright.

This clearly demonstrates that not only did all nine attempts fail miserably, in many cases they actually made things worse. Terrorism and attacks against Israel often increased immediately after or even during negotiations. This is by design: Hamas and other Palestinian factions have long adopted the policy that negotiating with Israel is not only forbidden, but must be resisted with force. Arguing that this is only an extremist position does not hold water, since most Palestinian civilians consistently support terrorist groups in polls, and the PLO openly rewards terrorists and their families and teaches violence in schools. Thus, not only are they far from fighting extremists—except for show—they actually encourage and support them.

In addition, any land handed to the Palestinians is quickly seized and weaponized by terrorists to attack Israel. This happened repeatedly in Gaza, the West Bank during Oslo, and twice in Lebanon. The Oslo accords are seen by many Israelis as a catastrophic mistake, since they allowed the PLO to return to Israel after being driven out of Jordan and Lebanon and exiled to Tunisia where they could do no harm.

Since withdrawals and peace plans have only exacerbated the violence, it should be no surprise that a majority of Israelis are deeply disillusioned with peace plans: not because they oppose peace or a Palestinian state in principle (a distinction borne out by the polls cited in a previous article), but because they are tired of being killed for their efforts. Contrary to the popular claim that opposition comes only from Israel's right wing, resistance to new partition plans has become bipartisan, with a majority of Israelis consistently voting against a Palestinian state, and support dropping further after each new wave of terrorist attacks.

Increased violence and weaponized territory are not the only problems: any established Palestinian state could easily and quickly become a Hamas or ISIS-style state, not just through a violent coup, but even via legal elections. This scenario actually occurred after Israel handed the whole of Gaza to the Palestinians on a silver platter in 2005, after which they promptly elected Hamas into power in 2006.

It is said, "fool me once, shame on you; fool me twice, shame on me". It is also said, "the definition of insanity is doing the same thing over and over again and expecting different results." What, then, could be said about a world determined to repeat the same catastrophic mistake a tenth time, expecting different results? What kind of madman repeats the same mistake for ninety years?

Despite all this, we have not yet stated the most compelling reason to disqualify the two-state solution. Most of the world doesn't care what Israelis think. In addition, madmen can indeed convince themselves that they will somehow succeed where others have failed for ninety years. But the most fatal flaw—the reason this idea is not merely wrong but absurd—is that even the Palestinians themselves do not want the Two-State Solution:

  • Palestinian civilians: In polls, when asked whether they support (1) the one-state solution, (2) the two-state solution, or (3) "a Palestinian state from the river to the sea", 75% of them chose option 3. (That's in addition to roughly 90% expressing support for terrorist groups.)

  • Jihadists: Hamas has stated explicitly—and enshrined this in both their charters—that Israel must be destroyed. Furthermore, they forbid any and all agreements with Israel that undermine their cause of destroying Israel.

  • The PLO: Although the PLO signed the Oslo Accords, agreed to a Two-State Solution, and officially recognized Israel in Western forums, this is both deceptive and meaningless. It is deceptive because even the PLO does not accept the Two-State Solution as a solution, but as a stepping stone. They declared this explicitly and officially in 1974, and have reaffirmed that position repeatedly even after Oslo, often invoking the al-Hudaybiya agreement as their model. Furthermore, despite their promise to amend their charter, which calls for Israel’s destruction, they have never actually done so. Even if all this fails to convince and their stated intentions are taken at face value, it is still meaningless because the PLO today represents only 15–25% of Palestinians in polls. Their position, therefore, does not represent Palestinian opinion.

One interesting counter-argument to the 'stepping stone' or Trojan horse claim is that if the PLO had truly intended to weaponize statehood, it would have accepted the best offer it could negotiate rather than rejected so many. That argument doesn't hold water, however, because even a Trojan horse requires certain core conditions to succeed. For instance, if the soldiers inside can't smuggle in their weapons—say, by lacking control over the border with Jordan—then the plan is unworkable and would naturally be rejected. More importantly, if the plan were to conquer Israel through the Right of Return, then any offer that excluded that demand would be rejected as well. And indeed, both Arafat and Abbas explicitly stated that very reason for rejecting the offers.

In summary, the classic Two-State Solution has not only been attempted nine times and failed disastrously nine times, it is supported by neither side of the conflict, and yet the world still trots it out whenever the topic is raised. This is beyond beating a dead horse: the horse has long since rotted away to a skeleton, yet world leaders keep digging up that skeleton, saddling it, and harnessing it to haul their political agendas.

 

One-State Solution Variants

One problem with the term 'One-State Solution' is that it serves as an umbrella for several fundamentally different solutions with drastically different outcomes. These variants are often conflated when the term is used without qualification, creating a great deal of confusion.

The key issue to keep in mind when evaluating these solutions is the demographic one: Israel currently includes over 7 million Jews and 2 million Arabs. Beyond that, there are roughly 3 million Palestinians in the West Bank, another 2 million in Gaza, and between 5 and 9 million so-called Palestinian 'refugees' outside the region who demand a 'Right of Return'. This yields a potential total of 12 to 16 million Arabs within the region.

The Democratic One-State AKA The Palestinian State

When Arabs, Muslims and pro-Palestinian activists around the world speak of a One-State Solution, they generally mean a simple democratic model in which every person gets one vote and the government rules by majority. Furthermore, they insist on the Right of Return. Given that this means an immediate Arab majority, it would be naive to imagine that laws wouldn't be reshaped to allow Palestinian culture and demands to dominate.

The outcome of this model is obvious: at best, this means the end of the Jewish state with a 2-to-1 Arab majority. It means yet another new Arab/Muslim state to join the existing 22 Arab and 50-plus Muslim states around the world, leaving none that are Jewish, with no home that Jews can call their own. It would almost certainly bring renewed waves of anti-Jewish violence and pogroms, with no IDF left to defend Jewish civilians. This is the best-case scenario. Given the barbaric behavior of many Gazan civilians during the October 7 attack, when they joined Hamas to rape and murder families, the worst-case outcome is a second Holocaust. Palestinians on the street have repeatedly and openly stated that, if and when they get their state, Jews will not be allowed to stay.

This so-called 'solution' is a nonstarter. When and if Arabs support it, they are fully aware of what they are endorsing. As for the few leftist Israelis who back this scenario, the only plausible explanation is suicidal recklessness born of extreme naivety.

The Israeli One-State

Unlike the previous variant, the minority of sane but impractical Israelis who support the One-State Solution generally envision a model in which Palestinians are restricted in some way, allowing the state to maintain its primarily Jewish character.

The most common proposed variant is one in which only the West Bank becomes part of Israel, resulting in 7 million Jews and 5+ million Arabs. Thus non-belligerent Arabs would receive full voting rights, but would not constitute a majority. In other words, the state's character would remain largely the same as present-day Israel, albeit with a significantly larger Arab minority. 

Even this seemingly practical solution faces multiple problems. It ignores Gaza. It does not solve the Judea and Samaria (AKA West Bank) security problem: Israel has had military control of the territory since 1967 without achieving peace, and assuming Palestinians would magically become peaceful upon receiving citizenship is naive. It does not explain what happens to Palestinians who do not wish to become Israeli citizens. It does not address inevitable future demographic changes. And, finally, it does not address the difficulty of forming Israel-friendly governing coalitions when over 40% of votes would come from Arabs.

It almost doesn't matter what percentage of the Arab population would be hostile to Israel after such changes. Even if only 5% were extremist terrorists, the situation could continue much as it has over the past 70 years. Advocates of these solutions overlook the key reason Ariel Sharon disengaged from Gaza: the high cost, in both lives and money, of controlling, patrolling, and policing hostile territories, and of protecting Jewish civilians.

Some variants propose giving Arabs more or less autonomy in their regions, but the core problems remain.

Other variations of the Israel single-state solution range from impractical to outright unworkable:

  • Any variation that strips Arab voting rights would constitute an apartheid state, with all the extreme political isolation and pressure that would ensue from such a move. Occupation without voting rights is legal; sovereignty without voting rights is a very different matter.

  • Some have suggested a model akin to Puerto Rico where residents cannot vote federally. Even in that case, Puerto Ricans can move into the U.S. proper and vote, but Arabs in the West Bank could not move into Israel for obvious reasons, making the analogy fundamentally flawed.

  • Any model that includes Gaza would create an instant Arab majority (see 'Democratic One-State' above). On the other hand, excluding Gaza leaves the problem unresolved.

Frankly, many of the Israelis who support these variants are extreme right-wing settlers so obsessed with settling more land that they overlook the glaring impracticalities of their proposals.

The Shared One-State 

This final variant of the One-State Solution is less popular but more intriguing: It involves various models that guarantee shared governance and protect freedoms for its diverse constituents. The most interesting feature of this approach is a theoretical mechanism that could potentially ensure balanced governance regardless of election outcomes or demographic shifts, by embedding power-sharing provisions directly into the constitution. In other words, it could secure basic rights and equal representation even for groups that are minorities.

One obvious real-world example of this model is both revealing and discouraging: Lebanon has employed a sectarian power-sharing system since 1943, guaranteeing shared governance and political representation for its Shiite, Sunni and Christian population. Unfortunately, it also vividly illustrates the deep flaws of such a system. One only has to look at the extreme instability of that country, its many wars, fierce factional disputes, prolonged political deadlock, and collapsing economy. If this extreme dysfunction occurs in a society that is nearly 100% Arab, one can only imagine the catastrophic results of attempting a similar shared governance model in a mixed Jewish-Arab state.

One often overlooked historical fact is that, of all the One-State Solution variants we've presented, this model is the only one that was formally placed on the negotiating table and presented to both sides. This occurred in 1939 under the British Mandate as part of the White Paper, following the Arab rejection of the Two-State Solution proposed by the Peel Commission and towards the end of the Arab revolt of 1936–39. Unfortunately, in addition to the challenges of this model mentioned in the previous paragraph, the White Paper also included many contentious clauses restricting Jewish immigration and land purchases in an attempt to placate the angry Arabs, as well as a degree of British control to guarantee functional governance. Unsurprisingly, it was rejected by the Jewish leadership. Surprisingly, it was also rejected by the Arab leaders. After this failure, the British gradually gave up on the whole issue and handed it over to the UN in 1947.


The Emirates Plan AKA The Eight-State Solution

This unique solution is intriguing and exciting, if only because it is creative, informed and very different from the tired, failed solutions that have been discussed ad nauseam for the past ninety years. However, it does require adopting a different mindset and understanding the Arab way of thinking. As such, it raises its own set of questions and uncertainties for people like myself who have not studied that mindset, although it does sound plausible and practical.

This plan originated with Dr. Mordechai Kedar, a man who became fluent in Arabic in his youth, served 25 years in IDF military intelligence specializing in Islamic groups, and later spent decades in academia as an expert Arabist. We will soon present a summary of his proposal, but it can be heard directly from him in the following videos: This one offers the best comprehensive presentation I could find, including answers to several key questions. This one is a shorter executive summary. This video offers an alternate take with additional points of interest.

His plan rests on two key foundations:

  1. Homogeneity and tribal rule lead to stability: Throughout the Arab world, states have only functioned and remained stable when they were homogeneous and ruled by a single family, clan, or tribe. Stability matters, because chaos inevitably breeds terrorist factions. Compare Saudi Arabia, the UAE, and Qatar with Iraq, Syria, or Lebanon, where multiple sects and ethnicities have struggled for dominance.

  2. Clan priorities override national, ideological or religious goals. This is the critical foundation. When Arab states are clan-based, the ruling family's agenda—whatever it may be—takes precedence even over Islamic or nationalist causes. Anyone within their territory who does not conform is tightly controlled, exiled, or eliminated. This is why, for example, the UAE recently had no hesitation in arresting and punishing Muslim extremists who attacked a Jew in a high-profile case. Clans do not allow groups such as ISIS or the Muslim Brotherhood to take root in their kingdoms, as such groups threaten their authority and stability.

Given these two foundations, the solution envisions establishing at least eight emirate-style island mini-states within Israel, one for each major region or city where a single clan has historically dominated. The term 'Eight-State Solution' is somewhat misleading, since it covers only the West Bank; Gaza would require the creation (or re-establishment) of roughly five additional emirates to replace and dismantle Hamas’s control.

These emirates would enjoy a high degree of autonomy, with details to be negotiated individually. They would function as separate, sovereign entities with their own internal security forces responsible for maintaining order within their territories.

A critical restriction, emphasized by Dr. Kedar, is that no emirate should have contiguous land borders with another. Contiguity, he argues, would lead to tribal conflict and instability, creating openings for terrorist factions to emerge. However, the emirates could form federations if they chose, and Israel could facilitate travel between them.

Israel would maintain full sovereignty over the land between these emirates, incorporating any scattered Arab populations in those areas as Israeli citizens, unless they chose to relocate into an emirate. Arabs living in the emirates would hold citizenship and ID cards issued by their own emirate, with the potential option of holding a secondary Israeli ID, subject to future negotiations.

My questions and skepticism concerning this plan are as follows:

  • If clans are still dominant in the West Bank, how did the PLO manage to take over? And why don't these clans clamp down on the many terrorists operating within their territories, terrorists whose actions force the IDF to repeatedly raid and damage property and lives there? Presumably, the clans were stripped of their power, money, and security forces; in other words, the PLO and Hamas suppressed traditional clan authority with their control and infrastructure, and the clans' power is merely dormant. But this is somewhat speculative and carries risk.

  • What happens if one of the dominant clans refuses to recognize Israel or is outright hostile to it? Presumably, it would be impossible to replace them. Are all clan leaders friendly to Israel? It didn't seem so back in 1947.

  • In fact, 1947 seems to disprove Kedar's theory altogether, since the PLO and Hamas did not exist then, unless the clans have since changed their minds.

  • What about the expelled or scattered hardline terrorists and ideological militants who refuse to abandon their cause? It's difficult to believe that thousands of lifelong jihadists, steeped in decades of Islamist indoctrination, would simply realign themselves under a clan leadership that cooperates with Israel. Kedar explained that the terrorists in their territories would be controlled or killed, but I am referring to the ones that are expelled or that live between the emirates.

  • Is the current chaos in Gaza truly fixable by this model? Can the clans be re-established despite widespread indoctrination by Hamas?

  • What if Hamas overthrows one of the emirates, throwing its leaders off rooftops, just as they did with the PLO? Wouldn’t that simply return us to square one?

In summary, this is a fascinating proposed solution, but one riddled with several major question marks. Still, it's definitely an idea worth exploring by leaders and policymakers, perhaps through a limited pilot project with a single emirate to test the concept in practice.


Trump's "Riviera Plan" for Gaza

This plan refers to Donald Trump's 2025 idea of encouraging and enabling all Gazans to leave Gaza, allowing the U.S. to convert Gaza into some kind of U.S.-controlled real-estate project. Obviously, this plan is largely tangential to the main topic of this article, as it was meant to address the Gaza war specifically, not the broader Israel-Palestine problem. However, as a corollary, it included a partial solution for the latter problem as well, and in a very creative and unique way, which is why many people were excited about the idea.

We will set aside discussions of the plan's legal aspects, its logistical challenges, debates about whether the proposed emigration was intended to be voluntary, and the ways the media may have distorted Trump’s words or intentions. Those issues are outside the scope of this article. I include this plan only to raise one intriguing question: would the mass emigration of Palestinians actually solve the Israeli-Palestinian problem?

At first glance, it seems obvious that if there are no Palestinians in the area, the problem simply vanishes. But that assumption collapses under scrutiny.

The first relevant question is whether Palestinians would be allowed to return once Gaza is rebuilt. If the answer is yes, then, obviously, the problem isn't solved, only postponed.

An overlooked, deeper and more serious concern, however, is where the Palestinians would emigrate to. Their destination is critical, as it could make things worse than they were. The worst-case scenario would be large-scale relocation to Egypt’s Sinai Peninsula, directly on Israel’s southern border. It's worth remembering that for decades, Egyptian smugglers funneled weapons and goods into Gaza through Sinai; if Palestinians were to settle there, smuggling would become even easier. Worst of all, it would give terrorists an even longer border with Israel than they had in Gaza, and with it more opportunities for attacks.

Even if Egypt initially promised to control such activity, it could later claim that it had 'lost control' of Sinai, creating a situation similar to Lebanon, where Hezbollah took military control of the border region with Israel. This expanded border would also make Israel’s position far more complicated politically and diplomatically, since it would now have to strike within the territory of a sovereign state to defend itself.

Other potential destinations, such as Lebanon or Jordan, pose similar risks. Lebanon could once again slide into chaos reminiscent of the 1982 war, while Jordan’s fragile balance could be destabilized. The only feasible solution of this kind would involve relocation to a country not sharing a border with Israel, as when the PLO was exiled to Tunisia between 1982 and 1993, during which time Israel experienced a notable reduction in border terrorism.


The Deradicalization Plan (AKA The Precondition Plan)

This brings us back to the 2023 article mentioned in the introduction, where I presented what seemed to be the only viable plan. First, a summary of the article:

The article began by first defining the exact cause of the problem: for a hundred years, the majority of Palestinian civilians—not a minority of extremists—have refused, and continue to refuse, to accept Israel as their neighbor. The article backed this claim with hard evidence. This simple but glaring truth explains why every peace initiative to date has failed, and the failures persist because most people refuse to acknowledge this reality.

Defining and accepting the actual problem is obviously critical to finding a viable solution. Without a definition, all solutions are little more than fantasies that 'solve' a non-existent reality.

If we were trying to reconcile a divorced couple, we would start with the assumption that, at least deep down, some part of them wants to get back together. If this desire were missing, no solution would ever get off the ground, no matter how well thought out it might be. But the opposite is also true: if the will exists, many solutions could work. Where there's a will, there's a way. And if the will is missing, then we must create it, or all efforts will be for naught.

That is why, once this problem is clearly defined, the only viable solution essentially writes itself: a deradicalization project must be implemented, no matter how long it takes or how many setbacks it faces. Any proposed solution must be preconditioned on the success of such a deradicalization project. Deradicalization is not a 'nice-to-have' side project or an afterthought while working on a solution; it is the fundamental key to making any plan work, and no solution can be initiated before it is complete.

The article outlined several potential implementation strategies. To these we could add that clear, verifiable goals and indicators must be defined. For example, undercover monitors could visit random schools and mosques to record lectures; when antisemitism ceases to appear in these locations for a set period of time, a milestone has been reached. Another indicator could be a set percentage of citizens who openly declare their support for the State of Israel in public without fear of repercussions. Other similar measurable milestones could be established.

A critical feature of this plan is that it accounts for failure: the preconditioned approach ensures that no high-risk state-forming initiative is attempted before the conditions are right. This makes it an easy sell to the Israeli public and to moderate right-wing governments. The article also strongly recommended involving international deradicalization organizations so that the global community can witness the radicalization firsthand. The project should be structured so that if it fails, the failure is an international one.

Furthermore, the world would then be working on a fresh, genuinely different project—one with a realistic chance of success—rather than endlessly repeating the same failed approach for the tenth time and expecting a different outcome.

As for arguments against this plan, imagine the following exchange:
Pro-Palestine: Your plan is a deception designed to cancel the Palestinian state while pretending to implement it. It sets impossible preconditions that could be invoked at any time to deny Palestinians their goal and to suppress, control and colonize them with foreign entities.
Pro-Israel: Are you saying that asking Palestinians to accept Israel is an impossible goal and that it will never happen in our lifetime? If that's the case, how can you reasonably demand a two-state solution? If Palestinians genuinely want to live side by side with Israel, the deradicalization phase will be completed in no time.

Note that if and when deradicalization is successful, several of the aforementioned failed solutions could actually work. The article hinted that this would be a precondition for some variant of a two-state solution, but the truth is it could work as a precondition for one of the practical one-state solutions as well.

World leaders who have staked their reputation on the Two-State Solution could still frame this approach as an extended version, or name it the 'Preconditioned Two-State Solution', and save face that way.

Having learned of the 'Emirates Plan' after writing the article, I must admit it also has the potential to be viable. However, it carries many question marks and significant risks, whereas the Deradicalization Plan greatly reduces, or even eliminates, those risks. On the other hand, the Deradicalization Plan also carries a significant chance of failure (albeit a risk-free one) and will take longer. Alternatively, even the Emirates Plan could benefit from a deradicalization precondition, which would help mitigate some of the risks and challenges highlighted earlier.


Trump's 20-Point Plan

We conclude with a few brief remarks on the peace plan President Trump is currently attempting to implement. As with the earlier 'Riviera Plan', its primary goal is to end the Gaza war and solve the Gaza problem, not to address the broader Israeli-Palestinian conflict. Still, it includes a brief mention of a potential final peace framework, which helped secure the plan's acceptance by Arab states. In addition, as with the Oslo Accords, it defers the final details, deliberately leaving the stages ambiguous and open to interpretation. For these reasons, the plan has only limited relevance to our discussion.

Over the past years, various political leaders have mentioned deradicalization in passing, but it has never been formalized or systematically integrated into any peace initiative, until now. This new 20-Point Plan is somewhat novel and encouraging in that it explicitly incorporates deradicalization as part of the overall framework, and even secured the signatures of several Arab states endorsing such a clause.

Specifically, the first clause states: "Gaza will be a deradicalized terror-free zone that does not pose a threat to its neighbors". Clause 16 adds: "An interfaith dialogue process will be established based on the values of tolerance and peaceful co-existence to try and change mindsets and narratives of Palestinians and Israelis by emphasizing the benefits that can be derived from peace". Qatar, Egypt and Turkey, however, signed a different document in Sharm el-Sheikh which states: "We are united in our determination to dismantle extremism and radicalization in all its forms. No society can flourish when violence and racism is normalized, or when radical ideologies threaten the fabric of civil life. We commit to addressing the conditions that enable extremism and to promoting education, opportunity, and mutual respect as foundations for lasting peace."

However, the plan does not define when deradicalization will occur, at which stage, and whether any other steps are dependent on its success. The wording—particularly the phrase 'to try'—is far from assertive. Nor does the plan specify who will oversee implementation or what concrete milestones must be met. Because of this vagueness, future administrators could interpret the deradicalization process either as a mandatory preliminary phase or merely as an aspirational principle, and nothing in the document guarantees which interpretation will prevail.

Therefore, the world has taken one small step toward the Deradicalization Plan, yet we are still far from truly adopting it as the foundation of any lasting peace. Setting preconditions is the key.


In conclusion: many solutions are viable in theory, but only if the conditions are right. The Deradicalization Plan remains the only real path forward, the only way to make any solution truly work. Trump's plan is ambiguous enough to allow for this interpretation, but that outcome is not guaranteed. Peace will be within reach only when the world finally faces reality for what it is. Instead, most world leaders choose to ignore reality and reward radicalization with 'solutions' that only strengthen the extremists.