Sunday, April 26, 2026

Part 4 - The Future of AI: The Next Stage of Intelligence

An essay exploring the future of AI, from past technologies and AGI debates to embodied AI, creativity, consciousness, continual learning, opportunities, and risks.

From past inventions to artificial general intelligence, embodied AI, and the question of consciousness

To predict the future is never easy. The future does not arrive with a clear label on its forehead. It often comes disguised as a toy, a tool, a strange machine, or a small improvement in daily life. Only later do we realize that something fundamental has changed.

From electricity to artificial intelligence
From electricity to artificial intelligence: every technology begins as disruption and becomes infrastructure. The question is whether AI will follow the same path or redefine what it means to think, learn, and act.

Electricity, the railway, the telegraph, radio, television, the internet, and the mobile phone all changed human society. At first, each of them created excitement, fear, speculation, and confusion. Some people believed they would transform everything. Others thought they were exaggerated, dangerous, or even useless. Then, slowly, society adapted. What once looked magical became normal.

The electric light became part of the room. The railway became part of travel. The telephone became part of conversation. The internet became part of work, memory, business, and friendship. The mobile phone became almost an extension of the hand. Technology often begins as wonder, then becomes infrastructure.

The question today is whether artificial intelligence will follow the same path. Will AI also become ordinary after some time, fading quietly into the background like electricity and the internet? Or is AI something different?

When Magic Becomes Routine

In many workplaces, AI is already becoming normal. Architects use it to generate visual ideas. Writers use it to improve drafts. Programmers use it to write and review code. Students use it to explain difficult subjects. Businesspeople use it to summarize documents, prepare presentations, and explore ideas.

What felt magical only a short time ago is becoming part of daily work. This is a familiar pattern in the history of technology. A new tool appears. It shocks us. Then it enters routine. Eventually, people stop saying, “This is amazing,” and begin saying, “This is how we work now.”

But AI is not only another tool. A railway does not think about where it wants to go. Electricity does not decide how to use itself. A mobile phone does not form a plan. AI is different because it touches intelligence itself. It does not only extend human muscle, speed, or communication. It begins to extend reasoning.

The Debate About AGI

This is why the debate about artificial general intelligence, or AGI, has become so important. AGI usually means an AI system that can perform many intellectual tasks at or above human level. Some researchers believe this may arrive very soon. Others are more cautious.

Dario Amodei of Anthropic has suggested that AI progress may accelerate quickly because AI can help write code and assist with AI research. In this view, AI may help build the next generation of AI, creating a powerful feedback loop. If the loop closes, progress may become much faster than most people expect.

Demis Hassabis of Google DeepMind is more cautious. He agrees that AI has made remarkable progress, especially in coding and mathematics. But he also points out that science is harder. In science, a good answer is not enough. A theory must be tested. A chemical compound must be made. A physical prediction must be checked against reality.

This is a crucial distinction. Coding and mathematics often have answers that can be verified quickly. Natural science is slower. It requires experiments, instruments, laboratories, time, and sometimes failure. Science is not only calculation. It is also the art of asking the right question.

Human Creativity and Machine Exploration

For now, human creativity remains central. Humans bring intuition, imagination, experience, purpose, and meaning. We do not only solve problems. We decide which problems matter.

But AI may bring another kind of creativity. It may explore possibilities that humans would never consider. A famous example came from AlphaGo, when it defeated Lee Sedol in the game of Go. One move, often remembered as Move 37, puzzled many experts. It looked strange, almost wrong. But it worked. The machine had found a path outside normal human intuition.

This does not mean AI is creative in the same way humans are creative. It means AI may be creative differently. Human creativity grows from life, emotion, memory, and meaning. AI creativity grows from vast exploration. It can search through landscapes of possibility too large for the human mind to walk alone.

The future of scientific discovery may therefore not be “human versus AI.” A better formula may be:

Human intuition + AI exploration = new discovery.

AI may not replace the scientist. But it may become a powerful scientific partner. It can suggest new paths, generate hypotheses, analyze enormous data, and reveal patterns that humans may miss. The human role may shift from doing every step alone to guiding, questioning, testing, and giving meaning to what AI discovers.

Human vs AI vs Human + AI: Creativity & Discovery

Three different ways of exploring the unknown

Human Creativity

Intuition, meaning, experience

  • Asks meaningful questions
  • Uses imagination and judgment
  • Connects discovery to purpose
  • Limited by habit and experience

AI Exploration

Scale, pattern search, computation

  • Searches vast possibilities
  • Finds unexpected patterns
  • Suggests strange new paths
  • Lacks human meaning and wisdom

Human + AI Discovery

Intuition guided by machine exploration

  • Humans ask the right questions
  • AI explores beyond intuition
  • Humans test, verify, and interpret
  • New discoveries become possible

Human insight + Machine exploration = Expanded discovery

The future of science may not be human versus AI, but human imagination working with machine-scale exploration.

AI Comes Out of the Screen

Another important next step is that AI will not remain inside the screen. Today, we mostly meet AI through text, images, voice, and chatbots. We type, and it answers. We ask, and it explains. But this is only the beginning.

Jensen Huang of Nvidia describes AI not merely as software, but as a new infrastructure. AI depends on energy, chips, data centers, cloud systems, models, and applications. In this sense, AI is not floating in the air. It is built on a physical foundation.

The next stage is embodied AI: AI connected to robots, machines, vehicles, laboratories, factories, and physical systems. AI will not only answer questions. It will act. It will move. It will see, touch, measure, repair, build, and assist.

This may be one of the most important changes. Previous tools extended human power. Computers extended calculation. The internet extended communication. AI extends intelligence. Robotics may extend that intelligence into action.

At first, AI was a voice in the machine. Then it became a mind behind the screen. Soon, it may have hands in the world.

The Evolution Toward Intelligent Systems

  • Tools (Past): Electricity, Railways, Telegraph
  • Computation (Digital Age): Computers, Internet, Mobile
  • AI Today (On Screen): Chatbots, Code Assistants, Knowledge Tools
  • Embodied AI (Next Step): Robotics, Physical Systems, Real-world Action
  • AGI? (Future): Continuous Learning, Creativity, Possible Autonomy

Underlying Layers: Energy → Chips → Cloud → Models → Applications

From tools that amplify human power to systems that may amplify intelligence itself.

The Question of Consciousness

Then comes a deeper question: does AGI need consciousness?

Intelligence and consciousness are not the same thing. Intelligence is the ability to solve problems, learn, reason, and adapt. Consciousness is subjective experience: the feeling of being aware, the inner sense of “I am.”

Current AI can appear intelligent, but there is no evidence that it is conscious. It can explain sadness without feeling sad. It can write about beauty without experiencing beauty. It can discuss the self without having a self.

This raises a paradox. Humans do not fully understand consciousness. If we do not understand it, how can we intentionally build it?

Perhaps consciousness is not necessary for AGI. A machine may become extremely capable without ever having an inner life. It may solve problems, design medicines, write code, and control robots without feeling anything.

Or perhaps consciousness may emerge from complexity. If a system becomes advanced enough, self-reflective enough, and connected enough to the world, something like awareness may appear. We do not know.

This uncertainty should make us humble. We may build machines that become powerful without being conscious. Or we may one day create something that behaves so much like a conscious being that the boundary becomes difficult to define.

The Missing Piece: Learning After the Cutoff

Another necessary step toward AGI is continual learning. Today’s AI systems are usually trained on large amounts of data and then fixed at a certain point. They may retrieve new information, but they do not truly learn from life in the same way humans do.

Human intelligence is different. We learn after every conversation. We update ourselves after mistakes. We change through experience. We do not have a final cutoff date.

For AI to become truly general, it must learn how to learn. It must be able to adapt after training, absorb new experience, correct itself, and improve over time without losing what it already knows.

This is difficult. If AI learns too freely, it may become unstable. If it learns too little, it remains frozen. If it learns wrongly, it may drift into dangerous behavior. The challenge is to build systems that can grow while remaining safe.

In other words, AGI requires more than knowledge. It requires learning as a living process.
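The contrast between a fixed "cutoff" and continual learning can be made concrete with a deliberately tiny sketch. This is a toy illustration, not any real AI system: both "models" simply estimate the average of a data stream. The frozen one stops learning at training time; the online one keeps updating its estimate with each new observation. The class name `ContinualToy` and the numbers are invented for illustration.

```java
import java.util.List;

// Toy illustration of "knowledge cutoff" vs. continual learning.
// Both models estimate the mean of a data stream; the frozen one
// is fixed after training, the online one keeps adapting.
public class ContinualToy {

    // "Training": compute a statistic once from the training data.
    static double trainMean(List<Double> trainingData) {
        return trainingData.stream().mapToDouble(d -> d).average().orElse(0.0);
    }

    public static void main(String[] args) {
        List<Double> training = List.of(1.0, 2.0, 3.0);
        double frozen = trainMean(training); // fixed at the "cutoff"

        // The online model continues from the same starting point
        // but incorporates data that arrives after training.
        double online = frozen;
        long n = training.size();
        for (double x : List.of(10.0, 10.0, 10.0)) { // post-cutoff data
            n++;
            online += (x - online) / n; // incremental (running) mean update
        }

        System.out.printf("frozen=%.2f online=%.2f%n", frozen, online);
        // prints: frozen=2.00 online=6.00
    }
}
```

The frozen estimate never moves, no matter how much the world changes; the online estimate drifts toward the new data while still carrying its past. Real continual learning faces exactly the tension the essay describes: update too aggressively and the old knowledge is overwritten, too conservatively and the model stays frozen.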

Opportunities and Pitfalls

The opportunities are enormous. AI may help cure diseases, accelerate science, improve education, reduce paperwork, support lonely people, help small businesses, and give ordinary individuals access to knowledge that once belonged only to experts.

But the pitfalls are also real. AI may displace jobs, especially entry-level white-collar work. It may concentrate power in the hands of a few companies or governments. It may be used for manipulation, surveillance, cyberattacks, or weapons. It may make humans passive, dependent, or less willing to think for themselves.

The greatest danger may not be that machines become intelligent. The greater danger may be that humans stop using their own intelligence wisely.

Will AI Become Normal?

So, will AI become normal like electricity, railways, television, the internet, and mobile phones?

In one sense, yes. We will get used to it. Children growing up with AI will not find it magical. They will speak to intelligent systems as naturally as previous generations used search engines or smartphones.

But in another sense, AI may remain different. Electricity gives power. The internet gives connection. AI gives something closer to thought. And when thought becomes a tool, the relationship between human and machine changes.

The future may not be a world where AI replaces humans. It may be a world where humans who know how to work with AI become far more capable than those who do not.

Final Thought

Every great technology carries both light and shadow. The railway connected cities, but also changed landscapes. Electricity illuminated homes, but also powered weapons. The internet opened knowledge, but also spread confusion. AI will be no different.

In Taoist thought, every force contains its opposite. Progress brings danger. Power demands wisdom. Speed requires balance.

The future of AI is not written only in code. It is written in human choices. If we guide AI with wisdom, it may become one of the greatest partners humanity has ever created. If we chase power without responsibility, it may become a mirror of our worst impulses.

The next step of AI is therefore not only technical. It is moral, social, and philosophical. The machine may learn to think faster.

But humanity must learn to become wiser.


References and Notes

  1. The discussion of older technologies becoming normal is inspired by the France 24 transcript, “AI is already getting boring,” which compares AI with electricity, railways, phones, and the internet.
  2. The section on AGI timelines draws on the debate between Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind at the World Economic Forum.
  3. The discussion of AI infrastructure, chips, energy, applications, and embodied AI draws on Jensen Huang’s remarks at the World Economic Forum.
  4. The AlphaGo example refers to DeepMind’s historic 2016 match against Lee Sedol, especially the famous unexpected move often remembered as Move 37.
Series: The Evolution of AI and Programming
This article is Part 4 of a series exploring the foundations of AI and the evolution of programming.

➜ Part 1: The Quiet Power of Ideas
➜ Part 2: From Spaghetti Code to Thinking Machines
➜ Part 3: At the Edge of Intelligence: The Limits of AI as a Thinking Partner

Thursday, April 23, 2026

The Evolution of AI and Programming (Series)

The Evolution of AI and Programming

This is a four-part series exploring the evolution of AI and programming:

  1. The first part offers an overview of AI’s development over time. 
  2. The second shares my personal journey with computers through programming languages. 
  3. The third reflects on my experience with AI, revealing both its remarkable possibilities and its limitations as a companion and thinking partner. 
  4. The fourth explores the future of AI, from tools to intelligence: examining AGI, embodied AI, and the delicate balance between innovation, risk, and human wisdom.

I hope you’ll enjoy reading it as much as I enjoyed writing it.

The series in four parts
From the origins of artificial intelligence to the next stage of intelligence itself, this series explores how we built AI and how it may transform the way we think, create, and live.

Part 1 — The Quiet Power of Ideas

The story begins with ideas. Long before AI became practical, it existed as theory—waiting for data and computation to bring it to life.

Read Part 1 →

Part 2 — From Spaghetti Code to Thinking Machines

A personal journey through programming languages, tracing the evolution from early code to natural language interaction with machines.

Read Part 2 →

Part 3 — At the Edge of Intelligence

At the frontier of AI, we encounter its limits. Understanding these boundaries reveals why humans must remain in the loop.

Read Part 3 →

Part 4 — The Future of AI: The Next Stage of Intelligence

An essay exploring the future of AI, from past technologies and AGI debates to embodied AI, creativity, consciousness, continual learning, opportunities, and risks.

Read Part 4 →

Together, these essays form a reflection on progress: how ideas become systems, how systems shape our thinking, and how responsibility remains human.

— A journey in understanding, by Dave Huynh

Wednesday, April 22, 2026

Part 3 - At the Edge of Intelligence: The Limits of AI as a Thinking Partner

From natural language to the boundary of understanding — why humans must remain in the loop

In the previous part of this series, programming reached a new stage: conversation. With modern AI systems, humans no longer need to write every line of code. Instead, they can describe problems in natural language, and machines generate solutions. It feels less like programming and more like dialogue.

Yet every new form of power reveals its own boundary. As we explore this new mode of communication, an important limitation becomes visible: AI does not learn in the same way humans do.

Unlike humans, most AI systems are trained once on large datasets and then deployed. After training, their internal knowledge is fixed. This is often referred to as the “knowledge cutoff.” While the system can generate responses and combine ideas creatively, it does not continuously update its understanding through experience.

At the Edge of Intelligence

This limitation becomes clear in practice. When new events occur—such as recent scientific discoveries or geopolitical developments—the model may not immediately reflect them. Similarly, in languages with less available data, responses may lack nuance compared to those in English. In these cases, the human user may possess more current or context-rich knowledge than the machine.

This reveals an important shift. In earlier stages of computing, the human adapted to the machine, learning its syntax and logic. Today, the machine adapts to human language. But the responsibility for truth, judgment, and context still rests with the human.

The interaction between human and AI is therefore not a replacement, but a partnership. The system can generate ideas, summarize information, and explore possibilities at remarkable speed. But it does not truly understand in the human sense. It does not possess experience, intention, or awareness. It operates through patterns learned from data, not through lived reality.

This distinction is critical. Without recognizing it, there is a risk of over-reliance—treating AI outputs as decisions rather than suggestions. In fields such as medicine, finance, or public policy, such a misunderstanding could have serious consequences. The human must remain “in the loop,” guiding, verifying, and interpreting the results.

At the same time, this limitation offers a reassuring perspective on the future of work. AI will not simply replace humans, nor can it fully take over human roles. Because it lacks real understanding, judgment, and lived experience, it still depends on human guidance. What is more likely is a shift in the nature of work. Tasks that involve repetition or pattern recognition may increasingly be assisted by AI, while human roles evolve toward interpretation, decision-making, and responsibility.

Those who understand both the capabilities and the limitations of AI will therefore remain in demand. The value lies not just in using the tool, but in knowing when to trust it, when to question it, and how to integrate it into meaningful work. In this sense, the future belongs not to AI alone, but to humans who know how to work with it.

At the same time, the limitation of AI also defines its strength. Because it does not rely on personal experience or bias in the human sense, it can process vast amounts of information and identify patterns beyond human capacity. The challenge is not to replace human thinking, but to combine it with machine capability.

We are therefore standing at the edge of a new form of intelligence—not artificial in the sense of imitation, but collaborative in nature. The future may not belong to machines alone, nor to humans alone, but to systems in which both contribute their strengths.

Final Thought
In the early days of computing, humans learned to think like machines. Today, machines begin to respond in human language. Yet understanding still requires judgment, and judgment remains human. AI can extend our thinking, but it cannot replace it. At the edge of intelligence, the most important role is not to ask better questions alone—but to know which answers to trust.


Footnotes

  1. Knowledge cutoff: AI models are trained on data available up to a certain point in time and do not automatically update their internal knowledge afterward.
  2. Training vs. inference: Training is the process of learning patterns from data; inference is the use of those patterns to generate outputs without further learning.
  3. Human-in-the-loop: A design principle where human judgment remains central in decision-making processes involving AI systems.
  4. Language data imbalance: AI performance varies across languages depending on the amount and quality of training data available.
Series: The Evolution of AI and Programming
This article is Part 3 of a series exploring the foundations of AI and the evolution of programming.

➜ Part 1: The Quiet Power of Ideas
➜ Part 2: From Spaghetti Code to Thinking Machines
➜ Part 4: The Future of AI: The Next Stage of Intelligence

Part 2 - From Spaghetti Code to Thinking Machines

 A personal journey through programming languages — and how we learned to communicate with computers

Every programmer’s journey is, in some sense, a story about communication. Not communication between people, but between human intention and machine execution. Over the decades, this dialogue has evolved—from rigid instructions written line by line to something closer to conversation. Looking back, the history of programming languages mirrors this gradual shift.

From Spaghetti Code to Thinking Machines

My own journey began with BASIC in high school. It was a simple and accessible language, and for many students, it was the first encounter with programming. Yet BASIC came with its own limitations. The heavy use of GOTO statements often led to what programmers called “spaghetti code”—programs that were difficult to follow, tangled in logic, and hard to maintain. Still, compared to low-level programming such as Assembly language, BASIC was a major step forward. It allowed us to focus less on machine instructions and more on problem-solving.

The next stage introduced more discipline. With languages such as Fortran and Pascal, I learned to structure programs using functions and procedures. Programming was no longer just a sequence of instructions; it became a form of organized thinking. Pascal, in particular, trained the mind to think like a machine. Variables had to be declared explicitly, as if they were boxes storing values. If you wanted to preserve a value, you had to store it before assigning a new one. Every step had to be precise, logical, and ordered.

Then came C—a powerful and efficient language. It offered speed and flexibility, but at a cost. Code written in C could be difficult to read, especially when written by others. Documentation became essential. Without clear explanations, debugging could turn into a long and frustrating process. Finding a small bug might take hours or even days. Yet when the bug was finally found, the sense of satisfaction was unmistakable. It was not just about fixing the code; it was about understanding the system more deeply.

The introduction of object-oriented programming marked another important shift. Languages such as C++ and Java provided tools to manage complexity by organizing code into objects and classes. This approach allowed programmers to model real-world systems more naturally.

A useful way to understand object-oriented programming is through familiar objects in everyday life. Consider a car. To drive it, you only need to know how to use the gas pedal, the brake, and the steering wheel—the “methods” of the car. There is no need to understand the details of the engine or how it works internally—the “data” inside the object. The complexity is hidden, allowing the user to focus on interaction rather than implementation.

This idea of encapsulation is also similar to how a biological cell works. A cell interacts with its environment through inputs and outputs, while its internal processes remain hidden. In the same way, objects in programming expose only what is necessary and keep their internal state protected.

By organizing systems into well-defined objects, object-oriented programming makes debugging and maintenance easier. Problems can be isolated within specific components, making large systems more manageable and robust. The focus gradually moved from writing instructions to designing structures.
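The car analogy above can be sketched in a few lines of code. This is a minimal, hypothetical `Car` class (invented for illustration, not taken from any library): the driver interacts only with the public methods, while the internal state stays private and hidden.

```java
// A minimal sketch of encapsulation: the "methods" (accelerate, brake)
// form the public interface, while the "data" (speed) stays hidden.
public class Car {
    private double speed = 0.0; // hidden internal state (the "data")

    // Public interface (the "methods"): all a driver needs to know.
    public void accelerate(double amount) {
        speed += amount;
    }

    public void brake(double amount) {
        speed = Math.max(0.0, speed - amount); // never below zero
    }

    public double currentSpeed() {
        return speed;
    }

    public static void main(String[] args) {
        Car car = new Car();
        car.accelerate(50);
        car.brake(20);
        System.out.println(car.currentSpeed()); // prints 30.0
    }
}
```

Because `speed` is private, no other code can put the car into an invalid state; any bug involving speed must live inside this one class. That isolation is exactly what makes large object-oriented systems easier to debug and maintain.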

Today, we are witnessing yet another transformation. With the rise of AI systems and code-generation tools, programming is entering a new phase. The programmer no longer needs to specify every step in detail. Instead, one can describe the problem in natural language, and the system generates the code. The interaction begins to resemble a conversation rather than a set of commands.

This shift reflects a deeper change in how humans communicate with machines. In the early days, interaction was limited to keyboards and precise syntax. Every character mattered, and every mistake resulted in failure. Over time, interfaces improved—first with structured languages, then with graphical interfaces using the mouse. Now, with AI, communication is moving toward natural language, where intention matters more than syntax.

Looking back, the evolution of programming languages is not just about technology. It is about abstraction—the gradual removal of barriers between human thought and machine execution. Each generation of tools has brought us closer to expressing ideas directly, without having to translate them into the rigid logic of machines.

Yet something important remains. Even as AI systems write code, the responsibility for clarity, correctness, and intent still belongs to the human. Understanding how systems work, how errors arise, and how solutions are structured remains essential. The tools have changed, but the discipline of thinking has not.

Final Thought
From spaghetti code to structured programming, from objects to intelligent systems, the journey of programming reflects a deeper movement: the gradual alignment between human language and machine understanding. In the beginning, we learned to think like machines. Today, machines are beginning to understand us. The future of programming may not be about writing code, but about expressing ideas clearly—so that both humans and machines can bring them to life.



Footnotes

  1. BASIC and “spaghetti code”: Early programming in BASIC often relied heavily on GOTO statements, which could create unstructured and hard-to-maintain code.
  2. Structured programming: Languages such as Pascal and Fortran introduced functions and procedures, encouraging clearer and more modular program design.
  3. C language: Known for its performance and control over system resources, but often criticized for reduced readability and safety compared to higher-level languages.
  4. Object-oriented programming (OOP): A programming paradigm that organizes software design around data (objects) and behavior (methods), emphasizing encapsulation and modularity.
  5. AI-assisted programming: Modern tools can generate code from natural language prompts, shifting the role of programmers from writing syntax to describing intent.

Part 1 - The Quiet Power of Ideas: How AI Was Built Before It Was Scaled

From forgotten theories to global systems — the long arc of technological progress

In every technological revolution, there is a temptation to focus on what is visible: the companies, the products, the rapid scaling of new systems. Yet beneath these visible layers lies something quieter and far more enduring — ideas. The history of artificial intelligence offers a striking example of how foundational scientific thinking often precedes, and ultimately shapes, large-scale technological change.

Long before AI became a commercial force, it existed as a set of abstract ideas. Researchers explored neural networks, learning algorithms, and probabilistic models, often with limited success. The computing power was insufficient, the data scarce, and the results unimpressive. This period, later known as the “AI winter,” saw declining interest and reduced funding. Many moved on to more practical fields such as the internet and mobile communication, which were rapidly transforming society.

Yet a small group of researchers persisted. Figures such as Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, often called the godfathers of deep learning, continued to develop the theoretical foundations of the field. Their work, at the time, was not driven by immediate application or commercial viability. It was driven by curiosity and belief in the underlying ideas. In retrospect, these efforts laid the intellectual groundwork for one of the most significant technological shifts of the 21st century.

Timeline of AI history from foundations to deep learning
Figure 1. The long road of AI, from early theory to the deep learning era.

The eventual breakthrough of AI did not come from ideas alone. It required the convergence of three essential elements: theory, data, and computation. The rise of the internet and mobile networks generated vast amounts of data, turning human activity into a continuous stream of digital information. At the same time, advances in computing power — from mainframes to cloud-based GPU systems — made it possible to train large-scale models. When these elements aligned, the dormant ideas of earlier decades suddenly became powerful and practical.

Triangle diagram showing ideas, data, and compute behind AI breakthrough
Figure 2. Modern AI took off only when ideas, data, and computational power aligned.

This pattern — ideas first, implementation later — is not unique to AI. It reflects a broader dynamic in technological development. Scientific breakthroughs often emerge long before their full implications are understood. Engineering then translates these ideas into usable systems, while capital and scaling bring them to the wider world. Each stage is essential, but they serve different roles. Ideas determine direction; engineering determines feasibility; scaling determines impact.

In today’s AI landscape, much attention is given to the rapid deployment of models and the competitive race among technology companies. While this phase is critical, it should not obscure the deeper origins of the field. The systems now transforming industries are built on decades of research that once seemed impractical or even irrelevant.

There is a quiet lesson here. Technological progress is not always linear, nor is it always visible. Ideas may lie dormant for years, even decades, waiting for the conditions that allow them to flourish. When those conditions arrive, change can appear sudden — but it is, in fact, the result of a long and patient accumulation of knowledge.

Final Thought
In the rhythm of progress, ideas are the hidden roots, and technology is the visible tree. We often admire the branches as they reach into the sky, but it is the unseen roots that determine how far the tree can grow. To understand the future, we must learn to value both — the quiet depth of ideas and the visible force of their realization.


Series: The Evolution of AI and Programming
This article is part of a four-part series exploring the foundations of AI and the evolution of programming.

➜ Part 2: From Spaghetti Code to Thinking Machines
➜ Part 3: At the Edge of Intelligence: The Limits of AI as a Thinking Partner
➜ Part 4: The Future of AI: The Next Stage of Intelligence

References

  1. Hinton, G., Bengio, Y., & LeCun, Y. — Foundational work in deep learning
  2. The concept of “AI Winter” — periods of reduced funding and interest in AI research
  3. Advances in computing power and data availability in the 2000s–2020s

Monday, April 13, 2026

Vietnam’s Economy: Between Momentum and Fragility

A rising manufacturing power faces the harder question: can growth become true prosperity?

For much of the late 20th century, Vietnam was known not for growth, but for escape. The image of “boat people” fleeing hardship defined a nation. Today, the picture could not be more different. Vietnam has become one of Asia’s fastest-growing economies, a magnet for global manufacturers, and a rising node in the world’s supply chains. Yet beneath the surface of this success lies a quieter question: how deep does this growth really go?

Since the Đổi Mới (Renewal) reforms of 1986, Vietnam has transformed itself from a centrally planned, communist-style system into a market-oriented economy. Income has risen sharply, poverty has fallen, and cities now hum with commercial energy. Per capita GDP, once measured in the hundreds of dollars, has climbed to around $4,000 today.1 The country’s progress has been real, visible not only in statistics but in the everyday rhythm of life: crowded streets, expanding factories, and a population moving with purpose.

The engine behind this rise is outward-facing. Vietnam has become a favored destination for foreign investment, especially as global firms seek alternatives to China. Electronics giants and technology suppliers—from Apple’s manufacturing partners to firms such as Dell, HP, Google, and Microsoft—have shifted production into the country.2 Exports have surged, with electronics now accounting for a large share of total shipments. Trade flows are enormous relative to the size of the economy, and Vietnam has embedded itself deeply in global supply chains.

Vietnam economy transformation: industry, city skyline, and agriculture
Vietnam’s economy: assembly lines, rising skylines, and enduring rural roots — a country balancing industry, trade, and tradition.

But integration is not the same as control. Much of Vietnam’s role remains at the final stage of production. Components—chips, displays, machinery—are imported, often from China, assembled domestically, and then exported to Western markets. This model delivers jobs and growth, but it captures only a modest share of the total value created. Vietnam participates in the system; it does not yet shape it.3

How the Supply Chain Works in Vietnam

China → produces many key components, such as chips, displays, machinery, and materials.
Vietnam → assembles the imported parts in factories, using local labor and export-focused production.
US / Europe → imports the finished goods for consumers, retailers, and global brands.

This is the quiet logic of Vietnam’s export model: China supplies many of the parts, Vietnam assembles the goods, and US and European markets absorb the final product.

The result is a split economy. Foreign-invested firms dominate exports and drive the fastest growth, while many domestic companies lag behind. One track is global, efficient, and capital-rich. The other is local, smaller, and under pressure. Foreign firms account for roughly three quarters of exports, and their growth has far outpaced that of domestic businesses.4 The headline numbers impress, but they mask an uneven foundation.

Dependence adds another layer of fragility. Vietnam is often seen as a beneficiary of the shift away from China, yet its production still relies heavily on Chinese inputs. In 2025, imports from China reached roughly $186 billion, leaving a large trade deficit.5 Far from replacing China, Vietnam often extends its industrial chain. The system works, but it ties Vietnam’s fortunes closely to forces beyond its control.

Domestic weaknesses are equally pressing. State-owned enterprises remain large but relatively inefficient. The private sector, though dynamic, has yet to fully close the gap. Labor productivity remains low compared with regional peers, and the value captured from manufacturing has not increased in proportion to output. Vietnam is producing more—but not necessarily gaining more.6

Meanwhile, the financial system carries growing risks. Credit has expanded rapidly, much of it flowing into real estate and large conglomerates. By 2025, credit growth approached 19% in a single year, pushing the credit-to-GDP ratio to around 146%.7 In good times, such expansion fuels growth. In weaker conditions, it can amplify shocks. When economic weight is concentrated in a handful of firms, their strength lifts the system—but their weakness can unsettle it.
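The arithmetic behind this concern can be made concrete. The following is a toy calculation only: it treats the cited 19% credit growth and 8% GDP growth as constant annual nominal rates (an assumption; the 8% figure in the text is real growth, and both rates would vary year to year), purely to show why credit expanding faster than output pushes the credit-to-GDP ratio upward.

```python
# Toy illustration: when credit grows faster than nominal GDP,
# the credit-to-GDP ratio compounds upward each year.
# All figures are illustrative, taken from the rates cited in the text.
credit_to_gdp = 1.46   # starting ratio (146% of GDP)
credit_growth = 0.19   # annual credit growth (19%)
gdp_growth = 0.08      # annual GDP growth (8%, treated as nominal here)

for year in range(1, 4):
    credit_to_gdp *= (1 + credit_growth) / (1 + gdp_growth)
    print(f"Year {year}: credit-to-GDP ≈ {credit_to_gdp:.0%}")
# Under these assumptions the ratio climbs from 146% toward roughly 195%
# within three years, which is why the pace of credit growth, not just
# its level, draws attention.
```

The point of the sketch is the mechanism, not the forecast: as long as the numerator compounds faster than the denominator, the ratio rises without any single year looking alarming.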

Even more immediate is a constraint that is less abstract: electricity. Industrial growth depends on reliable power, yet energy supply has struggled to keep pace. In 2025, GDP grew by over 8%, while electricity output rose by just 4.9%.8 Power shortages have already disrupted production, and delays in energy projects continue to stretch the system. For a manufacturing economy, electricity is not simply an input. It is the foundation.

Beyond these structural concerns lies a longer-term challenge. Vietnam is aging quickly, even as it remains relatively poor. Wages are rising, gradually eroding its low-cost advantage. The country now stands at a familiar crossroads: the risk of the middle-income trap. Growth driven by cheap labor and external demand has limits. The next phase—innovation, productivity, and domestic capability—is far more demanding.9

Yet Vietnam’s story is far from predetermined. Its strengths remain formidable: a strategic location, strong global ties, political stability, and a proven capacity to adapt. The shift from textiles to electronics shows that change is possible. The question now is whether the country can move further—into design, technology, services, and higher-value production.

Final Thought

In quieter terms, the challenge is one of balance. Growth has come quickly, almost like a river in flood season. But lasting prosperity requires a slower, deeper current: stronger institutions, capable domestic firms, and a more resilient economic structure. 

In the spirit of Eastern thought, success is not found in speed alone, but in harmony. When a nation learns to balance openness with self-reliance, ambition with restraint, and expansion with stability, its progress becomes not just rapid—but enduring.


Footnotes

1 Vietnam’s post-1986 Doi Moi reforms shifted the country from a centrally planned economy to a market-oriented one, raising GDP per capita from around $300 in the 1980s to roughly $4,000 today.

2 Major global firms and Apple suppliers such as Foxconn and Pegatron have expanded production in Vietnam, alongside companies like Dell, HP, Google, and Microsoft.

3 Vietnam’s manufacturing model focuses largely on assembling imported components—such as chips, displays, and machinery—before exporting finished goods.

4 Foreign-invested companies generate roughly three quarters of Vietnam’s exports, highlighting the dominance of multinational firms in the export sector.

5 Vietnam imported about $186 billion worth of goods from China in 2025, creating a trade deficit of more than $115 billion.

6 Domestic firms lag behind foreign companies in productivity and value creation, while state-owned enterprises remain less efficient.

7 Credit growth reached nearly 19% in 2025, pushing Vietnam’s credit-to-GDP ratio to around 146%, raising concerns about financial stability.

8 In 2025, GDP grew by over 8% while electricity output rose only 4.9%, reflecting constraints in energy supply and infrastructure.

9 Vietnam faces demographic pressures, including rapid aging and rising wages, increasing the risk of falling into the middle-income trap.

Saturday, April 11, 2026

Taoism and Science: Two Languages, One Universe

Modern science measures reality. Taoism learns how to live within it. The distance between them may be smaller than it seems.

For centuries, humanity has tried to understand the world through two very different lenses. One builds instruments, writes equations, and tests hypotheses. The other watches rivers, studies silence, and learns from the way nature moves. We call the first science. We call the second Taoism.


At first glance, they seem worlds apart. Science is precise, analytical, and outward-looking. It builds models, tests them, and refines them through evidence. Taoism, by contrast, is quiet, intuitive, and experiential. It does not try to measure reality. It asks how to live within it. One seeks control through understanding. The other seeks harmony through alignment.

And yet, beneath this contrast, there is a surprising convergence. Not in method, but in attitude. Both begin with the same discipline: they take reality seriously. They do not start with what we wish the world to be. They begin with what is.

Science proceeds by proposing explanations and submitting them to test. A theory survives not because it is elegant, but because it matches observation.1 Taoism proceeds differently. It does not test hypotheses in laboratories. Instead, it refines perception. It trains attention. It encourages a way of living that reduces friction with the natural world.3 Science sharpens the intellect. Taoism softens the will.

The difference is real. Science is a method for knowing. Taoism is a way of being. But when modern science looks deeply into nature, it often finds patterns that feel strangely familiar to Taoist thought.

The first is the idea of interconnected systems. In older scientific thinking, the world was often treated as a machine made of separate parts. Today, that view is giving way to something more subtle. Ecosystems, climate systems, neural networks, and even physical theories describe reality as a web of relationships. Nothing stands alone. Each part depends on others, and small changes can propagate across the whole.

Taoism has long described reality in similar terms. It does not divide the world into isolated units. It sees patterns, flows, and relationships. The world is not assembled. It unfolds. Science maps the network. Taoism experiences it.

The second point of convergence is flow instead of force. Taoism expresses this through the idea of wuwei—acting without unnecessary strain, aligning action with the natural course of events rather than resisting it.4 This is not passivity. It is precision of a different kind: knowing when to act, and how much effort is required.

Science, in its own way, often discovers that efficiency comes from respecting constraints rather than ignoring them. Water follows the path of least resistance. Organisms adapt to their environment. Even well-designed technologies succeed by working with natural laws, not against them. The language is different, but the lesson is similar: force is rarely the most intelligent strategy.

The third resonance lies in complementary opposites. Taoism is built on the dynamic between Yin and Yang—interdependent forces that define and balance each other.5 Light and dark, action and rest, expansion and contraction: each exists in relation to the other. Harmony emerges not by eliminating one side, but by allowing their rhythm.

Modern science, particularly in quantum physics, has uncovered a similar kind of duality at the heart of reality. Light, for example, behaves both as a particle and as a wave, depending on how it is observed.6 This is not a simple contradiction to be resolved, but a deeper truth to be accepted. Reality does not always conform to single, fixed categories. It reveals itself through complementary descriptions.
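The duality described above also has a compact mathematical form. As a brief sketch, the de Broglie relation ties the particle picture (momentum) to the wave picture (wavelength):

```latex
% De Broglie relation: a particle with momentum p is associated
% with a wave of wavelength lambda, where h is Planck's constant.
\lambda = \frac{h}{p}
% Large momentum  -> short wavelength: particle-like behavior dominates.
% Small momentum  -> long wavelength: wave-like behavior becomes visible.
```

The same object is neither purely wave nor purely particle; which aspect shows itself depends on the scale involved and on how the object is measured.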

Yin–Yang and Wave–Particle Duality

Two traditions, two languages, one shared intuition: reality is often expressed through complementary aspects.

☯ Taoism: Yin ↔ Yang
Dark ↔ Light
Receptive ↔ Active
Rest ↔ Motion
Opposites are not enemies. They complete each other.

⚛ Quantum Physics: Wave ↔ Particle
Spread out ↔ Localized
Continuous ↔ Discrete
Interference ↔ Detection
Nature can reveal different but complementary aspects.

Shared insight: reality may resist simple either–or categories. Sometimes truth appears as a dynamic balance of two complementary forms.

This does not mean that quantum physics proves Taoism. But it does show that the intuition behind complementary opposites is not merely poetic. At the most fundamental level, nature itself can resist either–or thinking. It asks us to hold two aspects at once, just as Yin and Yang suggest.

This does not mean Taoism is science, nor that science validates ancient philosophy. The distinction remains important. Science depends on measurement, replication, and public verification. Taoism depends on insight, experience, and lived practice. One produces knowledge that can be shared and tested. The other produces wisdom that must be cultivated.

But when placed side by side, they offer something richer than either alone. Science tells us what we can do. Taoism asks whether we should do it, and how. Science expands our reach. Taoism tempers our ambition. Science reveals the structure of the world. Taoism reminds us that we are part of that structure, not outside it.

In an age of accelerating technological power, that distinction matters. The ability to act is no longer our main limitation. The challenge is learning how to act without destabilising the systems we depend on. Knowledge without balance becomes force. Power without restraint becomes risk.

Final Thought

Maybe this is the bridge:

Science asks: “How does the world work?”
Taoism asks: “How should we move within it?”

Put together, they form something powerful:

Understanding… and harmony.

And perhaps that is why this intuition feels right.
Not because Taoism is science—
but because both are, in their own way,
listening carefully to the same universe.


References

1 Encyclopaedia Britannica, “Scientific Method,” describes how science builds and tests models through observation and experiment.

3 Stanford Encyclopedia of Philosophy and Encyclopaedia Britannica, “Taoism,” describe Taoism as a philosophical tradition focused on harmony with nature and the Dao.

4 Encyclopaedia Britannica, “Wuwei,” defines it as action aligned with the natural course of things rather than forced intervention.

5 Encyclopaedia Britannica, “Yin and Yang,” explains them as complementary and interdependent forces forming balance in the cosmos.

6 Encyclopaedia Britannica, “Wave–Particle Duality,” explains that light and matter can exhibit both wave-like and particle-like properties depending on experimental conditions.
