Though times change, the essence of human experience remains. By tracing the path of the past, we find meaning in the present and glimpse the direction of what is to come.
A reflection on AI, humanity, and the meaning of "writing" in a new era
Human writing has never been perfect, yet every word and every sentence carries its own weight. It is shaped by the writer's life: by memories, experiences, successes and failures, and the losses and hopes accumulated over a lifetime. Precisely because life is imperfect, human writing has depth and authenticity.
By contrast, AI-generated prose is often technically smooth. It is grammatical, coherent, clear, even beautiful. Yet something essential is missing. AI has no childhood, no private sorrow, no inner struggle, no lessons paid for with a life of its own. However polished its sentences, they rarely carry the emotional weight of writing born from lived experience.
Human writing or machine writing? Perhaps the best approach lies in between: not AI replacing the writer, but AI accompanying the writer.
In this role, AI becomes a tool for expanding the imagination. It can suggest new associations, generate variations, propose different openings and endings, and help the writer through moments of blockage. But the final decision must remain with the human. The acts of choosing, discarding, revising, and taking responsibility for the text are what give writing its true value.
A beautiful metaphor for this process is sculpture. Michelangelo is said to have remarked that the statue already exists inside the block of stone, and the artist's task is simply to chisel away the excess.
AI can produce endless words, like a rough block of stone. But meaning emerges only through human action: refining, revising, and choosing. The writer is the sculptor.
Likewise, AI can supply the "raw stone." It can generate hundreds of pages, hundreds of openings, hundreds of endings, and countless possible directions. But the writer must work like a sculptor: cutting away, polishing, breaking apart, and rebuilding until the true form emerges.
The danger is not that AI writes too much, but that we may accept too quickly what is merely "good enough." A fluent, grammatical sentence is not necessarily meaningful. A smooth paragraph is not necessarily truthful.
AI-assisted writing has arrived; in truth, it is already part of our lives. Rather than rejecting or resisting it, we need to understand it, make use of its abilities, and learn to live with it clear-eyed. Our task is to harness its benefits while staying aware of its limits.
In the end, the machine can offer many options. But a piece of writing, a work with meaning, still needs a human hand.
A reflection on AI writing, human experience, and the future of authorship
Human writing is imperfect, yet each sentence carries weight. It is shaped by the author’s life: by memories, experiences, successes, failures, losses, hopes, and all the quiet lessons accumulated over time. Life itself is not perfect, and that imperfection gives human writing its depth and authenticity.
In contrast, AI writing is often technically polished. It can be grammatically correct, well structured, fluent, and elegant. Yet something essential may still be missing. AI does not carry childhood memories, personal sorrow, moral struggle, joy, regret, or the scars of lived experience. Its sentences may be smooth, but they do not carry the same emotional weight as sentences written from life.
Perhaps the most meaningful approach lies in the middle: not AI replacing the writer, but AI writing alongside the writer.
In this role, AI becomes a tool for expanding imagination. It can suggest new associations, generate variations, offer alternative openings and endings, and help the writer move through moments of creative blockage. But the final decision must remain with the human being. The act of choosing, rejecting, revising, and assuming responsibility for the text is what gives writing its true literary value.
A beautiful metaphor for this process is sculpture. Michelangelo is often associated with the idea that the sculpture already exists inside the block of stone, and the artist’s task is to remove the excess.
AI can generate endless words, like a block of raw stone. But meaning emerges only through the human act of shaping, refining, and choosing. The writer remains the sculptor.
In the same way, AI can provide the raw material. It can generate hundreds of pages, hundreds of beginnings, hundreds of endings, and countless possible directions. But the human writer must become the sculptor: cutting away, reshaping, breaking apart, and rebuilding until the true form emerges.
The danger is not that AI writes too much. The danger is that we may accept too quickly what is merely good enough. A fluent sentence is not always a meaningful sentence. A polished paragraph is not always a truthful one.
AI writing is coming, and in many ways it is already here. We should not simply resist it. We must understand it, embrace its possibilities, and learn to live with it wisely. The task before us is to maximize its benefits while remaining alert to its pitfalls.
In the end, the machine may give us abundance.
But meaning ... still asks for a human hand.
An essay exploring the future of AI, from past technologies and AGI debates to embodied AI, creativity, consciousness, continual learning, opportunities, and risks.
From past inventions to artificial general intelligence, embodied AI, and the question of consciousness
To predict the future is never easy. The future does not arrive with a clear label on its forehead. It often comes disguised as a toy, a tool, a strange machine, or a small improvement in daily life. Only later do we realize that something fundamental has changed.
From electricity to artificial intelligence: every technology begins as disruption and becomes infrastructure. The question is whether AI will follow the same path or redefine what it means to think, learn, and act.
Electricity, the railway, the telegraph, radio, television, the internet, and the mobile phone all changed human society. At first, each of them created excitement, fear, speculation, and confusion. Some people believed they would transform everything. Others thought they were exaggerated, dangerous, or even useless. Then, slowly, society adapted. What once looked magical became normal.
The electric light became part of the room. The railway became part of travel. The telephone became part of conversation. The internet became part of work, memory, business, and friendship. The mobile phone became almost an extension of the hand. Technology often begins as wonder, then becomes infrastructure.
The question today is whether artificial intelligence will follow the same path. Will AI also become ordinary after some time, fading quietly into the background like electricity and the internet? Or is AI something different?
When Magic Becomes Routine
In many workplaces, AI is already becoming normal. Architects use it to generate visual ideas. Writers use it to improve drafts. Programmers use it to write and review code. Students use it to explain difficult subjects. Businesspeople use it to summarize documents, prepare presentations, and explore ideas.
What felt magical only a short time ago is becoming part of daily work. This is a familiar pattern in the history of technology. A new tool appears. It shocks us. Then it enters routine. Eventually, people stop saying, “This is amazing,” and begin saying, “This is how we work now.”
But AI is not only another tool. A railway does not think about where it wants to go. Electricity does not decide how to use itself. A mobile phone does not form a plan. AI is different because it touches intelligence itself. It does not only extend human muscle, speed, or communication. It begins to extend reasoning.
The Debate About AGI
This is why the debate about artificial general intelligence, or AGI, has become so important. AGI usually means an AI system that can perform many intellectual tasks at or above human level. Some researchers believe this may arrive very soon. Others are more cautious.
Dario Amodei of Anthropic has suggested that AI progress may accelerate quickly because AI can help write code and assist with AI research. In this view, AI may help build the next generation of AI, creating a powerful feedback loop. If the loop closes, progress may become much faster than most people expect.
Demis Hassabis of Google DeepMind is more cautious. He agrees that AI has made remarkable progress, especially in coding and mathematics. But he also points out that science is harder. In science, a good answer is not enough. A theory must be tested. A chemical compound must be made. A physical prediction must be checked against reality.
This is a crucial distinction. Coding and mathematics often have answers that can be verified quickly. Natural science is slower. It requires experiments, instruments, laboratories, time, and sometimes failure. Science is not only calculation. It is also the art of asking the right question.
Human Creativity and Machine Exploration
For now, human creativity remains central. Humans bring intuition, imagination, experience, purpose, and meaning. We do not only solve problems. We decide which problems matter.
But AI may bring another kind of creativity. It may explore possibilities that humans would never consider. A famous example came from AlphaGo, when it defeated Lee Sedol in the game of Go. One move, often remembered as Move 37, puzzled many experts. It looked strange, almost wrong. But it worked. The machine had found a path outside normal human intuition.
This does not mean AI is creative in the same way humans are creative. It means AI may be creative differently. Human creativity grows from life, emotion, memory, and meaning. AI creativity grows from vast exploration. It can search through landscapes of possibility too large for the human mind to walk alone.
The future of scientific discovery may therefore not be “human versus AI.” A better formula may be:
Human intuition + AI exploration = new discovery.
AI may not replace the scientist. But it may become a powerful scientific partner. It can suggest new paths, generate hypotheses, analyze enormous data, and reveal patterns that humans may miss. The human role may shift from doing every step alone to guiding, questioning, testing, and giving meaning to what AI discovers.
Human vs AI vs Human + AI: Creativity & Discovery
Three different ways of exploring the unknown
Human Creativity (intuition, meaning, experience): asks meaningful questions, uses imagination and judgment, and connects discovery to purpose, yet remains limited by habit and experience.
AI Exploration (scale, pattern search, computation): searches vast possibilities, finds unexpected patterns, and suggests strange new paths, yet lacks human meaning and wisdom.
Human + AI Discovery (intuition guided by machine exploration): humans ask the right questions, AI explores beyond intuition, and humans test, verify, and interpret; new discoveries become possible.
Human insight + Machine exploration = Expanded discovery.
The future of science may not be human versus AI, but human imagination working with machine-scale exploration.
AI Comes Out of the Screen
Another important next step is that AI will not remain inside the screen. Today, we mostly meet AI through text, images, voice, and chatbots. We type, and it answers. We ask, and it explains. But this is only the beginning.
Jensen Huang of Nvidia describes AI not merely as software, but as a new infrastructure. AI depends on energy, chips, data centers, cloud systems, models, and applications. In this sense, AI is not floating in the air. It is built on a physical foundation.
The next stage is embodied AI: AI connected to robots, machines, vehicles, laboratories, factories, and physical systems. AI will not only answer questions. It will act. It will move. It will see, touch, measure, repair, build, and assist.
This may be one of the most important changes. Previous tools extended human power. Computers extended calculation. The internet extended communication. AI extends intelligence. Robotics may extend that intelligence into action.
At first, AI was a voice in the machine. Then it became a mind behind the screen. Soon, it may have hands in the world.
From tools that amplify human power to systems that may amplify intelligence itself.
The Question of Consciousness
Then comes a deeper question: does AGI need consciousness?
Intelligence and consciousness are not the same thing. Intelligence is the ability to solve problems, learn, reason, and adapt. Consciousness is subjective experience: the feeling of being aware, the inner sense of “I am.”
Current AI can appear intelligent, but there is no evidence that it is conscious. It can explain sadness without feeling sad. It can write about beauty without experiencing beauty. It can discuss the self without having a self.
This raises a paradox. Humans do not fully understand consciousness. If we do not understand it, how can we intentionally build it?
Perhaps consciousness is not necessary for AGI. A machine may become extremely capable without ever having an inner life. It may solve problems, design medicines, write code, and control robots without feeling anything.
Or perhaps consciousness may emerge from complexity. If a system becomes advanced enough, self-reflective enough, and connected enough to the world, something like awareness may appear. We do not know.
This uncertainty should make us humble. We may build machines that become powerful without being conscious. Or we may one day create something that behaves so much like a conscious being that the boundary becomes difficult to define.
The Missing Piece: Learning After the Cutoff
Another necessary step toward AGI is continual learning. Today’s AI systems are usually trained on large amounts of data and then fixed at a certain point. They may retrieve new information, but they do not truly learn from life in the same way humans do.
Human intelligence is different. We learn after every conversation. We update ourselves after mistakes. We change through experience. We do not have a final cutoff date.
For AI to become truly general, it must learn how to learn. It must be able to adapt after training, absorb new experience, correct itself, and improve over time without losing what it already knows.
This is difficult. If AI learns too freely, it may become unstable. If it learns too little, it remains frozen. If it learns wrongly, it may drift into dangerous behavior. The challenge is to build systems that can grow while remaining safe.
In other words, AGI requires more than knowledge. It requires learning as a living process.
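The tension between learning too freely and learning too little can be illustrated with a toy example. The sketch below (not a model of any real AI system; all names are invented for illustration) is a minimal online estimator whose learning rate plays exactly this role: at zero the system stays frozen at its initial state, while at high values it chases every new observation.

```cpp
#include <cassert>
#include <cmath>

// Toy sketch of "learning after deployment". The learning rate controls
// the tension described above: too high and the estimate chases noise
// (unstable); too low and it barely moves (frozen at its "cutoff").
class OnlineEstimator {
public:
    OnlineEstimator(double initial, double learning_rate)
        : estimate_(initial), lr_(learning_rate) {}

    // Update toward each new observation instead of staying fixed.
    void observe(double value) {
        estimate_ += lr_ * (value - estimate_);
    }

    double estimate() const { return estimate_; }

private:
    double estimate_;
    double lr_;
};
```

With a learning rate of zero the estimator never changes, no matter what it observes; the challenge for real systems is to find the safe middle ground between these extremes.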
Opportunities and Pitfalls
The opportunities are enormous. AI may help cure diseases, accelerate science, improve education, reduce paperwork, support lonely people, help small businesses, and give ordinary individuals access to knowledge that once belonged only to experts.
But the pitfalls are also real. AI may displace jobs, especially entry-level white-collar work. It may concentrate power in the hands of a few companies or governments. It may be used for manipulation, surveillance, cyberattacks, or weapons. It may make humans passive, dependent, or less willing to think for themselves.
The greatest danger may not be that machines become intelligent. The greater danger may be that humans stop using their own intelligence wisely.
Will AI Become Normal?
So, will AI become normal like electricity, railways, television, the internet, and mobile phones?
In one sense, yes. We will get used to it. Children growing up with AI will not find it magical. They will speak to intelligent systems as naturally as previous generations used search engines or smartphones.
But in another sense, AI may remain different. Electricity gives power. The internet gives connection. AI gives something closer to thought. And when thought becomes a tool, the relationship between human and machine changes.
The future may not be a world where AI replaces humans. It may be a world where humans who know how to work with AI become far more capable than those who do not.
Final Thought
Every great technology carries both light and shadow. The railway connected cities, but also changed landscapes. Electricity illuminated homes, but also powered weapons. The internet opened knowledge, but also spread confusion. AI will be no different.
In Taoist thought, every force contains its opposite. Progress brings danger. Power demands wisdom. Speed requires balance.
The future of AI is not written only in code. It is written in human choices. If we guide AI with wisdom, it may become one of the greatest partners humanity has ever created. If we chase power without responsibility, it may become a mirror of our worst impulses.
The next step of AI is therefore not only technical. It is moral, social, and philosophical. The machine may learn to think faster.
But humanity must learn to become wiser.
References and Notes
The discussion of older technologies becoming normal is inspired by the France 24 transcript, “AI is already getting boring,” which compares AI with electricity, railways, phones, and the internet.
The section on AGI timelines draws on the debate between Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind at the World Economic Forum.
The discussion of AI infrastructure, chips, energy, applications, and embodied AI draws on Jensen Huang’s remarks at the World Economic Forum.
The AlphaGo example refers to DeepMind’s historic 2016 match against Lee Sedol, especially the famous unexpected move often remembered as Move 37.
Series: The Evolution of AI and Programming
This article is Part 4 of a series exploring the foundations of AI and the evolution of programming.
This is a four-part series exploring the evolution of AI and programming:
The first part offers an overview of AI’s development over time.
The second shares my personal journey with computers through programming languages.
The third reflects on my experience with AI, revealing both its remarkable possibilities and its limitations as a companion and thinking partner.
The fourth explores the future of AI, from tools to intelligence: examining AGI, embodied AI, and the delicate balance between innovation, risk, and human wisdom.
I hope you’ll enjoy reading it as much as I enjoyed writing it.
From the origins of artificial intelligence to the next stage of intelligence itself, this series explores how we built AI and how it may transform the way we think, create, and live.
Part 1 — The Quiet Power of Ideas
The story begins with ideas. Long before AI became practical, it existed as theory—waiting for data and computation to bring it to life.
Part 4 — The Future of AI: The Next Stage of Intelligence
An essay exploring the future of AI, from past technologies and AGI debates to embodied AI, creativity, consciousness, continual learning, opportunities, and risks.
From natural language to the boundary of understanding — why humans must remain in the loop
In the previous part of this series, programming reached a new stage: conversation. With modern AI systems, humans no longer need to write every line of code. Instead, they can describe problems in natural language, and machines generate solutions. It feels less like programming and more like dialogue.
Yet every new form of power reveals its own boundary. As we explore this new mode of communication, an important limitation becomes visible: AI does not learn in the same way humans do.
Unlike humans, most AI systems are trained once on large datasets and then deployed. After training, their internal knowledge is fixed. This is often referred to as the “knowledge cutoff.” While the system can generate responses and combine ideas creatively, it does not continuously update its understanding through experience.
This limitation becomes clear in practice. When new events occur—such as recent scientific discoveries or geopolitical developments—the model may not immediately reflect them. Similarly, in languages with less available data, responses may lack nuance compared to those in English. In these cases, the human user may possess more current or context-rich knowledge than the machine.
This reveals an important shift. In earlier stages of computing, the human adapted to the machine, learning its syntax and logic. Today, the machine adapts to human language. But the responsibility for truth, judgment, and context still rests with the human.
The interaction between human and AI is therefore not a replacement, but a partnership. The system can generate ideas, summarize information, and explore possibilities at remarkable speed. But it does not truly understand in the human sense. It does not possess experience, intention, or awareness. It operates through patterns learned from data, not through lived reality.
This distinction is critical. Without recognizing it, there is a risk of over-reliance—treating AI outputs as decisions rather than suggestions. In fields such as medicine, finance, or public policy, such a misunderstanding could have serious consequences. The human must remain “in the loop,” guiding, verifying, and interpreting the results.
At the same time, this limitation offers a reassuring perspective on the future of work. AI will not simply replace humans, nor can it fully take over human roles. Because it lacks real understanding, judgment, and lived experience, it still depends on human guidance. What is more likely is a shift in the nature of work. Tasks that involve repetition or pattern recognition may increasingly be assisted by AI, while human roles evolve toward interpretation, decision-making, and responsibility.
Those who understand both the capabilities and the limitations of AI will therefore remain in demand. The value lies not just in using the tool, but in knowing when to trust it, when to question it, and how to integrate it into meaningful work. In this sense, the future belongs not to AI alone, but to humans who know how to work with it.
At the same time, the limitation of AI also defines its strength. Because it does not rely on personal experience or bias in the human sense, it can process vast amounts of information and identify patterns beyond human capacity. The challenge is not to replace human thinking, but to combine it with machine capability.
We are therefore standing at the edge of a new form of intelligence—not artificial in the sense of imitation, but collaborative in nature. The future may not belong to machines alone, nor to humans alone, but to systems in which both contribute their strengths.
Final Thought
In the early days of computing, humans learned to think like machines. Today, machines begin to respond in human language. Yet understanding still requires judgment, and judgment remains human. AI can extend our thinking, but it cannot replace it. At the edge of intelligence, the most important role is not to ask better questions alone—but to know which answers to trust.
Footnotes
Knowledge cutoff: AI models are trained on data available up to a certain point in time and do not automatically update their internal knowledge afterward.
Training vs. inference: Training is the process of learning patterns from data; inference is the use of those patterns to generate outputs without further learning.
Human-in-the-loop: A design principle where human judgment remains central in decision-making processes involving AI systems.
Language data imbalance: AI performance varies across languages depending on the amount and quality of training data available.
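The training/inference distinction in the footnotes above can be sketched with a toy model (the data, function names, and parameter here are invented purely for illustration): a training loop adjusts a single weight from examples, after which inference applies the frozen weight without any further learning.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Training: learn a single parameter w so that y ~ w * x,
// by repeated gradient steps on the squared prediction error.
double train_slope(const std::vector<std::pair<double, double>>& data,
                   int epochs, double lr) {
    double w = 0.0;  // the one learnable parameter
    for (int e = 0; e < epochs; ++e) {
        for (const auto& p : data) {
            double error = w * p.first - p.second;  // prediction error
            w -= lr * error * p.first;              // gradient step
        }
    }
    return w;  // after training, w is frozen
}

// Inference: apply the frozen parameter; nothing is updated here.
double predict(double w, double x) { return w * x; }
```

Everything the model "knows" is baked into `w` at training time; at inference time it can only apply that knowledge, which is the essence of the knowledge cutoff.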
Series: The Evolution of AI and Programming
This article is Part 3 of a series exploring the foundations of AI and the evolution of programming.
A personal journey through programming languages — and how we learned to communicate with computers
Every programmer’s journey is, in some sense, a story about communication. Not communication between people, but between human intention and machine execution. Over the decades, this dialogue has evolved—from rigid instructions written line by line to something closer to conversation. Looking back, the history of programming languages mirrors this gradual shift.
My own journey began with BASIC in high school. It was a simple and accessible language, and for many students, it was the first encounter with programming. Yet BASIC came with its own limitations. The heavy use of GOTO statements often led to what programmers called “spaghetti code”—programs that were difficult to follow, tangled in logic, and hard to maintain. Still, compared to low-level programming such as Assembly language, BASIC was a major step forward. It allowed us to focus less on machine instructions and more on problem-solving.
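The GOTO style can be felt even in a few lines. Below is a small invented example, written in C++ (which also has a goto statement) rather than BASIC: even a trivial summing loop becomes harder to trace when control jumps between labels instead of flowing through a structured loop.

```cpp
// A tiny taste of BASIC-style jump-driven control flow, reproduced
// with C++'s goto. The comments show the rough BASIC equivalent.
int sum_to(int n) {
    int total = 0;
    int i = 1;
loop:
    if (i > n) goto done;   // BASIC: IF I > N THEN GOTO 40
    total += i;
    ++i;
    goto loop;              // BASIC: GOTO 20
done:
    return total;
}
```

At this size the jumps are merely awkward; in a program of hundreds of lines, they produce the tangled "spaghetti" described above.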
The next stage introduced more discipline. With languages such as Fortran and Pascal, I learned to structure programs using functions and procedures. Programming was no longer just a sequence of instructions; it became a form of organized thinking. Pascal, in particular, trained the mind to think like a machine. Variables had to be declared explicitly, as if they were boxes storing values. If you wanted to preserve a value, you had to store it before assigning a new one. Every step had to be precise, logical, and ordered.
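The "store it before assigning a new one" discipline can be sketched in a few lines. The following toy example (written in C++ for illustration; Pascal would enforce the declarations even more strictly) is the classic temporary-variable swap, where a value must be explicitly preserved before it is overwritten.

```cpp
// Swapping two values requires an explicit temporary: exactly the
// precise, ordered, box-by-box thinking that Pascal trained.
void swap_values(int& a, int& b) {
    int temp = a;  // preserve a's value before overwriting it
    a = b;
    b = temp;
}
```

Forget the temporary and both variables end up holding the same value; the language forces the programmer to think through every step in order.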
Then came C—a powerful and efficient language. It offered speed and flexibility, but at a cost. Code written in C could be difficult to read, especially when written by others. Documentation became essential. Without clear explanations, debugging could turn into a long and frustrating process. Finding a small bug might take hours or even days. Yet when the bug was finally found, the sense of satisfaction was unmistakable. It was not just about fixing the code; it was about understanding the system more deeply.
The introduction of object-oriented programming marked another important shift. Languages such as C++ and Java provided tools to manage complexity by organizing code into objects and classes. This approach allowed programmers to model real-world systems more naturally.
A useful way to understand object-oriented programming is through familiar objects in everyday life. Consider a car. To drive it, you only need to know how to use the gas pedal, the brake, and the steering wheel—the “methods” of the car. There is no need to understand the details of the engine or how it works internally—the “data” inside the object. The complexity is hidden, allowing the user to focus on interaction rather than implementation.
This idea of encapsulation is also similar to how a biological cell works. A cell interacts with its environment through inputs and outputs, while its internal processes remain hidden. In the same way, objects in programming expose only what is necessary and keep their internal state protected.
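The car analogy can be made concrete. The minimal sketch below (class and method names invented for illustration) exposes only the "pedals" as public methods while keeping the internal state hidden, in the spirit of the encapsulation described above.

```cpp
// The public interface is the gas pedal, brake, and speedometer;
// the internal state, like the engine, stays hidden from the driver.
class Car {
public:
    void accelerate() { speed_ += 10; }               // "gas pedal"
    void brake() { speed_ = speed_ >= 10 ? speed_ - 10 : 0; }
    int speed() const { return speed_; }              // read-only view

private:
    int speed_ = 0;  // hidden state: not directly accessible outside
};
```

Because `speed_` is private, no outside code can set the speed to an inconsistent value; any bug in how speed changes must live inside the class, which is what makes such systems easier to debug and maintain.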
By organizing systems into well-defined objects, object-oriented programming makes debugging and maintenance easier. Problems can be isolated within specific components, making large systems more manageable and robust. The focus gradually moved from writing instructions to designing structures.
Today, we are witnessing yet another transformation. With the rise of AI systems and code-generation tools, programming is entering a new phase. The programmer no longer needs to specify every step in detail. Instead, one can describe the problem in natural language, and the system generates the code. The interaction begins to resemble a conversation rather than a set of commands.
This shift reflects a deeper change in how humans communicate with machines. In the early days, interaction was limited to keyboards and precise syntax. Every character mattered, and every mistake resulted in failure. Over time, interfaces improved—first with structured languages, then with graphical interfaces using the mouse. Now, with AI, communication is moving toward natural language, where intention matters more than syntax.
Looking back, the evolution of programming languages is not just about technology. It is about abstraction—the gradual removal of barriers between human thought and machine execution. Each generation of tools has brought us closer to expressing ideas directly, without having to translate them into the rigid logic of machines.
Yet something important remains. Even as AI systems write code, the responsibility for clarity, correctness, and intent still belongs to the human. Understanding how systems work, how errors arise, and how solutions are structured remains essential. The tools have changed, but the discipline of thinking has not.
Final Thought
From spaghetti code to structured programming, from objects to intelligent systems, the journey of programming reflects a deeper movement: the gradual alignment between human language and machine understanding. In the beginning, we learned to think like machines. Today, machines are beginning to understand us. The future of programming may not be about writing code, but about expressing ideas clearly—so that both humans and machines can bring them to life.
BASIC and “spaghetti code”: Early programming in BASIC often relied heavily on GOTO statements, which could create unstructured and hard-to-maintain code.
Structured programming: Languages such as Pascal and Fortran introduced functions and procedures, encouraging clearer and more modular program design.
C language: Known for its performance and control over system resources, but often criticized for reduced readability and safety compared to higher-level languages.
Object-oriented programming (OOP): A programming paradigm that organizes software design around data (objects) and behavior (methods), emphasizing encapsulation and modularity.
AI-assisted programming: Modern tools can generate code from natural language prompts, shifting the role of programmers from writing syntax to describing intent.
From forgotten theories to global systems — the long arc of technological progress
In every technological revolution, there is a temptation to focus on what is visible: the companies, the products, the rapid scaling of new systems. Yet beneath these visible layers lies something quieter and far more enduring — ideas. The history of artificial intelligence offers a striking example of how foundational scientific thinking often precedes, and ultimately shapes, large-scale technological change.
Long before AI became a commercial force, it existed as a set of abstract ideas. Researchers explored neural networks, learning algorithms, and probabilistic models, often with limited success. The computing power was insufficient, the data scarce, and the results unimpressive. This period, later known as the “AI winter,” saw declining interest and reduced funding. Many moved on to more practical fields such as the internet and mobile communication, which were rapidly transforming society.
Yet a small group of researchers persisted. Figures such as Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, often called the godfathers of deep learning, continued to develop the theoretical foundations of what would later become deep learning. Their work, at the time, was not driven by immediate application or commercial viability. It was driven by curiosity and belief in the underlying ideas. In retrospect, these efforts laid the intellectual groundwork for one of the most significant technological shifts of the 21st century.
Figure 1. The long road of AI, from early theory to the deep learning era.
The eventual breakthrough of AI did not come from ideas alone. It required the convergence of three essential elements: theory, data, and computation. The rise of the internet and mobile networks generated vast amounts of data, turning human activity into a continuous stream of digital information. At the same time, advances in computing power — from mainframes to cloud-based GPU systems — made it possible to train large-scale models. When these elements aligned, the dormant ideas of earlier decades suddenly became powerful and practical.
Figure 2. Modern AI took off only when ideas, data, and computational power aligned.
This pattern — ideas first, implementation later — is not unique to AI. It reflects a broader dynamic in technological development. Scientific breakthroughs often emerge long before their full implications are understood. Engineering then translates these ideas into usable systems, while capital and scaling bring them to the wider world. Each stage is essential, but they serve different roles. Ideas determine direction; engineering determines feasibility; scaling determines impact.
In today’s AI landscape, much attention is given to the rapid deployment of models and the competitive race among technology companies. While this phase is critical, it should not obscure the deeper origins of the field. The systems now transforming industries are built on decades of research that once seemed impractical or even irrelevant.
There is a quiet lesson here. Technological progress is not always linear, nor is it always visible. Ideas may lie dormant for years, even decades, waiting for the conditions that allow them to flourish. When those conditions arrive, change can appear sudden — but it is, in fact, the result of a long and patient accumulation of knowledge.
Final Thought
In the rhythm of progress, ideas are the hidden roots, and technology is the visible tree. We often admire the branches as they reach into the sky, but it is the unseen roots that determine how far the tree can grow. To understand the future, we must learn to value both — the quiet depth of ideas and the visible force of their realization.
Series: The Evolution of AI and Programming
This article is part of a four-part series exploring the foundations of AI and the evolution of programming.