Is artificial intelligence just an advanced tool, or is it something more? Should we see it as an "artificial human," or rather as a new kind of entity - something along the lines of an ancient oracle that makes predictions but is devoid of human emotions and conscience? This question takes on particular importance in light of our discussion so far of AI's evolution: from its first steps, through deep learning, to today's dominant generative models.
Our journey so far through the history of AI development has shown how perceptions of it have changed. There is no single, universally accepted definition of artificial intelligence. What once seemed to be the pinnacle of human intellectual capacity is now becoming a mere computational process. Definitions that speak of "mimicking human abilities" are often too vague and general.
It is worth looking at AI as systems that exhibit intelligent behavior, analyze their environment and act with a degree of autonomy to achieve specific goals. Modern AI systems, such as ChatGPT, are not "intelligence" in the human sense, but rather "prediction machines." Their ability to analyze data and predict future trends is impressive, but unlike humans, AI has no personality, conscience or emotions. This is a key difference, and it allows us to look at AI from a new perspective.
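To make the idea of a "prediction machine" concrete, here is a minimal, purely illustrative sketch in Python: a toy model that counts which word tends to follow which in a tiny text and then predicts the most likely continuation. This is my own simplified illustration, not how ChatGPT is actually built; real language models do the same kind of next-word prediction at vastly greater scale, using neural networks rather than a small table of counts.

```python
from collections import Counter, defaultdict

# Toy "prediction machine": count which word follows which in a tiny corpus,
# then predict the most frequent continuation. (Illustrative only; real
# language models learn these statistics with neural networks at huge scale.)
corpus = (
    "the oracle gives an answer the oracle gives a prophecy "
    "the priest interprets the prophecy"
).split()

next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the toy corpus."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("oracle"))  # -> "gives"
print(predict_next("the"))     # -> "oracle" (its most frequent follower here)
```

The point of the sketch is only this: the system picks the statistically most likely continuation, without understanding, intention or emotion.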
To better understand the concept of AI as an "oracle," it is worth going back to antiquity. Oracles were religious and social institutions that played a key role in the ancient world. The most famous of these - the Delphic Oracle - served as the spiritual and advisory center of the ancient Greek world.
The oracle at Delphi, dedicated to Apollo, was the place where the Pythia, the god's priestess, relayed divine prophecies. Rulers, politicians and ordinary citizens came to her, seeking answers to questions about the future, political decisions or personal choices. Sitting on a tripod over a fissure in the rock from which vapors rose, the Pythia would fall into a trance and deliver prophecies that were often ambiguous and required interpretation by the priests.
Interestingly, the Delphic oracle owed its effectiveness to several key elements: the ambiguity of its prophecies, the need for interpretation by the priests, and the authority and trust it commanded. These characteristics in some ways resemble today's AI systems, which also give answers that often require interpretation, are consulted on important decisions, and enjoy considerable trust.
An in-depth analysis of Moravec's paradox reveals fundamental differences between human and artificial intelligence. What is simple for humans - like recognizing faces, walking or understanding speech - turns out to be extremely difficult for computers. In contrast, what is difficult for humans, like advanced mathematical calculations or playing chess, is relatively easy for machines.
This paradox is due to the different nature of human and artificial intelligence. Our brains have evolved over millions of years, developing intuitive perceptual and motor skills. Computers, on the other hand, rely on algorithms and mathematical models. Therefore, we can't expect AI to act the same way we do. This is not an "artificial human," but something completely new.
Analyzing the history of the perception of artificial intelligence, from the triumph of Deep Blue in chess to modern language models, we observe the characteristic "AI effect." This phenomenon shows how our perception of technology changes over time. When a computer masters a task that we previously thought was a manifestation of human intelligence, we begin to see it as "just computation." What was once awe-inspiring becomes, over time, ordinary and unsurprising. As Nick Bostrom noted, "AI is anything that surprises us at any given time, and when we are no longer impressed, we simply call it software."
Comparing modern AI to an ancient oracle is not just a metaphor - it's a profound structural analogy. Like the Delphic oracle, today's AI systems answer questions about the future and about difficult decisions, speak in outputs that often require interpretation, and command considerable trust and authority. However, unlike ancient oracles, modern AI relies on data, algorithms and mathematical models rather than ritual and divine inspiration, and it is devoid of consciousness, conscience and emotions.
Understanding AI as an "oracle" has important implications for its development and for our perception of it. It raises questions about accountability for decisions made with AI and about the limits of trust in its predictions. We need an ethical framework to govern the development and deployment of AI, especially to guard against the dangers of generated false content and disinformation.
The future lies in human-machine collaboration, where we leverage the unique capabilities of both parties. Humans bring intuition, creativity and the ability to empathize, while AI offers computing power, data analysis and precision. Instead of striving for universal AI, we should focus on developing systems specialized for specific tasks.
Understanding the nature of AI is the first step to using its potential effectively. Prompts, that is, questions, commands and instructions formulated in natural language, play a key role here. They will be described in more detail in subsequent articles, but it can already be said that properly constructed prompts make it possible to obtain precise and accurate answers from language models. Prompt engineering, the art of formulating queries precisely, is becoming an increasingly important skill.
In practice, we encounter different categories of prompts, ranging from simple questions, through commands and instructions, to more complex queries that combine context, examples and constraints.
It is important to tailor prompts to the specific domain, control tone and style, and optimize their length. AI combines knowledge gained during training with information received from the user, so answers may differ even when the question is asked in the same way.
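As a hedged illustration of how such prompts can be put together, the sketch below assembles a prompt from a role, a task, domain context, a tone requirement and a length limit. The structure and names used here are my own illustrative choices, not a fixed standard; the resulting text could be sent to any chat-style language model.

```python
# Minimal sketch of structured prompt construction. The parts (role, task,
# context, tone, length limit) are illustrative, not an official template.

def build_prompt(role: str, task: str, context: str, tone: str, max_words: int) -> str:
    """Assemble a prompt from separate, easy-to-adjust parts."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Write in a {tone} tone and use at most {max_words} words."
    )

prompt = build_prompt(
    role="an experienced science communicator",
    task="explain what a language model is",
    context="the reader is completely new to artificial intelligence",
    tone="friendly, plain-language",
    max_words=120,
)
print(prompt)  # Paste or send this text to the chat model of your choice.
```

Keeping the pieces separate makes it easy to adjust one element, such as the tone or the length limit, and observe how the model's answer changes.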
Regardless of the sophistication of AI, it is crucial to verify the content it generates. Language models can make mistakes, give false information or create "hallucinations," i.e., invented facts. Particular care should be taken when using AI responses on important and sensitive issues.
Looking to the future, we can predict that AI's role as an "oracle" will evolve. Artificial intelligence systems will become increasingly sophisticated in their predictions and analysis, while retaining their fundamental nature - a tool to support human decisions, not replace human judgment.
It will be crucial to strike the right balance between trusting technological "predictions" and critical thinking and human intuition. Just as the ancient Greeks did not treat oracles as infallible, we should maintain a healthy distance from AI predictions while appreciating its potential as a tool to support our decisions.
AI is not an "artificial human," but a new kind of being - an "oracle" that makes predictions without human emotions or conscience. This understanding, flowing from our discussion of Moravec's paradox and the evolution of perceptions of artificial intelligence, is crucial for the responsible development and use of AI. The future lies in conscious cooperation between humans and machines, where each party uses its unique abilities. Instead of competing, we should be learning from each other and working together toward a better world.
AI as a modern oracle represents a new kind of entity in human history - a system capable of advanced predictions and analysis, but devoid of human consciousness and emotions. This comparison allows us to better understand both the capabilities and limitations of artificial intelligence, while pointing out the need to use this technology wisely and responsibly.
Note!
This article was developed with the support of Claude 3.5 Sonnet, an advanced AI language model. Although Claude helped organize and present the content, the final form and the opinions expressed in the article reflect the author's own thoughts and the mission of the AI For Everyone project. This article has also been automatically translated from Polish using DeepL. If you find any errors, please let me know in the comments.