Artificial Intelligence: A Tool or a Crutch
The Age of AI, The Age of Choice: A Tool or a Crutch
Thousands of years ago, when human beings first learned to control fire, they did more than illuminate darkness. They altered the rhythm of life itself. Fire extended the day. The wheel expanded distance. The printing press multiplied knowledge. Electricity reshaped civilisation. The internet collapsed geography.
Every great tool in history arrived with excitement, fear, resistance—and eventually, acceptance.
As Ayn Rand reflected in Atlas Shrugged, “The first man who discovered fire did not do so by rubbing two sticks together. He saw fire in nature and learned to control it.” Human progress has never been about invention alone. It has always been about how consciously and responsibly we choose to use what we discover.
Artificial Intelligence now joins this long lineage of transformative tools. Yet it feels fundamentally different. For the first time in history, our tools are not merely extending our muscles or memory; they are beginning to mirror aspects of our thinking.
That is why AI fascinates us.
That is also why it unsettles us.
This time, the tool does not simply rest in our hands. It occupies the space we associate most deeply with human identity—reasoning, creativity, judgment, and decision-making. This is not merely a technological shift; it is a human one. As machines begin to imitate certain patterns of cognition, our uniquely human qualities become more important, not less: empathy, ethical judgment, creativity, emotional intelligence, and the capacity to care.
The real transformation is not happening only in technology. It is unfolding within us. AI challenges us to reconsider what defines intelligence, what values guide our choices, and how responsibly we act in an age of unprecedented capability. Progress today is no longer measured solely by speed or efficiency, but by wisdom, balance, and humanity. How we grow alongside AI will shape not only our future, but the kind of people we choose to become.
Understanding the Tool at the Centre of the Shift
My journey into understanding Artificial Intelligence has been guided by curiosity and an increasing sense of responsibility. With every reading, discussion, and reflection, my questions deepened. This article is not merely an opinion piece; it is a thoughtful synthesis shaped by learning, research, and insights drawn from credible voices. Its purpose is simple yet urgent: to replace fear with understanding, confusion with clarity, and passive consumption with informed engagement.
If this moment represents a human shift, then it becomes essential to understand the tool at its centre.
At its core, Artificial Intelligence refers to systems designed to perform tasks that once required human intelligence—recognising patterns, analysing data, learning from experience, generating language, and making predictions. But AI today is no longer just a faster calculator. It writes essays, diagnoses diseases, drives vehicles, creates art, and influences decisions that shape careers, health outcomes, and life choices.
In education, healthcare, governance, agriculture and daily life, AI has become an invisible co-pilot—always present, increasingly influential. What makes this moment unprecedented is not only what machines can do, but what humans are beginning to delegate to them.
When Thinking is Replaced Instead of Supported
During a recent discussion referencing Professor Vasant Dhar of NYU Stern, author of Thinking with Machines, a powerful insight emerged: AI is not only changing machines; it is changing how humans think.
Drawing from real-life interactions with families and learners, Dhar highlights a subtle yet critical pattern. When children turn to AI to solve problems they are capable of working through themselves, the outcome may still be correct—but something vital is lost. The internal process of struggle, reasoning, and discovery begins to fade.
Over time, this quiet substitution can erode not intelligence, but confidence. Learners begin to trust the machine more than their own minds. What diminishes is not ability, but belief in one’s capacity to think.
Dhar refers to this risk as “de-amplification.” When AI is used as a shortcut instead of a scaffold, it does not elevate human thinking—it weakens it. The greatest danger is not that learners will use AI, but that they will stop trusting their own reasoning.
This brings us to one of the most defining metaphors of our time:
AI can be a tool—or it can become a crutch.
A tool challenges, supports, and strengthens.
A crutch carries—and over time, one’s own strength fades.
If AI is used to stretch thinking, it sharpens the mind.
If AI is used to replace thinking, it quietly trains the mind to step back.
The future of artificial intelligence, therefore, is not only about smarter machines. It is about whether we will remain strong thinkers in a world filled with intelligent tools.
Human Mentors in an Age of Intelligent Machines
This concern resonates deeply within education. During recent interactions with parents of students in Classes IX and XI, one recurring anxiety surfaced: projects that once required weeks of effort at the university level are now completed in days using AI tools.
Yet the conversation also revealed something encouraging. Higher education is not merely resisting AI; it is evolving with intention.
One parent spoke of a professor at IIT Roorkee who designed a chatbot to assist students only at the initial stages of a task. The system provides direction, prompts, and guiding questions, but deliberately withholds finished answers. Students must still think, construct, and create independently.
Such examples reflect a critical shift. As AI grows more powerful, the role of human mentors becomes more important—not less. Teachers and professors are no longer just instructors; they are navigators, guiding learners to use technology with purpose and restraint.
Because in the end, the real intelligence at stake is not artificial.
It is human.
A Cognitive Divide in the Making
Building on this concern, thinkers like Dhar warn of a deeper consequence: a growing cognitive divide. Not one defined by wealth or access alone, but by how AI is used.
A small minority may utilise AI as a tool to think more effectively, make better decisions, and produce better outcomes. A far larger majority risk using it as a crutch—to avoid effort, outsource judgment, and surrender cognitive independence.
This is not intelligence being stolen by machines. It is intelligence being quietly surrendered by humans.
A simple classroom moment illustrates this risk. When a child says, “I couldn’t do my homework because ChatGPT was down,” it may sound humorous. Yet beneath the humour lies dependency replacing effort, and convenience replacing intellectual struggle.
In our eagerness to save time, we risk becoming penny-wise and pound-foolish—gaining short-term ease at the cost of long-term thinking ability.
AI, when used wisely, can sharpen human thought. Used carelessly, it can quietly dull it.
Education in the Age of AI: Where the Human Role Deepens
Every educational transformation—from slates to smartboards—has followed a familiar pattern: fear, adaptation, and transformation. AI is no different. But this time, the stakes are higher.
The fear that teachers will become obsolete is deeply misplaced. As machines take over routine tasks, educators are freed to do what only humans can do: mentor, motivate, guide, and inspire.
Teachers are not mere content deliverers. They are emotional anchors, ethical guides, and architects of human experience. In an AI-rich classroom, the teacher evolves from information provider to human amplifier.
This shift creates space for:
• One-to-one connection
• Emotional intelligence
• Critical and reflective thinking
• Moral reasoning
• Curiosity cultivation
• Creativity coaching
A powerful example comes from Barnard College in New York, where a professor designed a custom classroom chatbot aligned with the course objectives. Instead of banning AI or allowing shortcuts, the tool was intentionally shaped to promote reflection and inquiry. Students used it to ask better questions—not to replace effort, but to strengthen it.
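For readers curious about how such a “reflection-first” assistant might be configured, here is a minimal sketch in Python. It is a hypothetical illustration, not the Barnard or IIT Roorkee tool; the model name and prompt wording are assumptions, and the guiding behaviour lives almost entirely in the system prompt, which asks the model to reply with questions and hints rather than finished answers.

# Hypothetical sketch of a "guide, don't answer" classroom chatbot.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The scaffolding behaviour is defined here, in plain language,
# rather than in any special feature of the model itself.
SOCRATIC_GUIDE = (
    "You are a study guide. Never give the final answer or a finished essay. "
    "Instead, ask one clarifying question, point to a relevant concept, or "
    "suggest the next small step the student could try on their own."
)

def guide(student_message: str, history: list | None = None) -> str:
    """Return a reply that prompts thinking instead of replacing it."""
    messages = [{"role": "system", "content": SOCRATIC_GUIDE}]
    messages += history or []
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model works
        messages=messages,
    )
    return response.choices[0].message.content

# Example: the bot should respond with guiding questions, not a finished essay.
print(guide("Can you write my essay on Julius Caesar for me?"))

The point of the sketch is that the pedagogy sits in the prompt: the same underlying model can either hand over answers or withhold them, depending on how educators choose to frame its role.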
Technology is neutral. Pedagogy gives it purpose.
Choosing Tool Over Crutch: A Shared Responsibility
Making students love learning, and feel every subject come alive, requires a shared responsibility among students, parents, teachers, and educational leaders.
For students, this means using AI to explore, practise, and stretch their thinking—not to replace it. A learner might attempt a challenging problem first, then ask AI to offer alternative approaches or visualise a concept differently, turning mistakes into opportunities for deeper understanding.
Parents play a key role by asking not just what their child used AI for, but why. Did AI help them uncover the layers of a Shakespearean scene, understand a complex historical event, or visualise scientific concepts in action? Such conversations reinforce that AI is meant to deepen engagement, not bypass it.
Teachers can reclaim time from routine tasks and focus on what truly inspires students. In English, AI can generate varied interpretations of Julius Caesar, helping students appreciate nuance and emotion, while teachers guide discussion on motive, character, and moral dilemmas. In Mathematics, AI can create tiered problem sets or visual simulations, allowing teachers to focus on patterns, reasoning, and critical thinking. In Physics, AI can support simulations and real-life problem sets, which teachers can then link to daily life—like the braking of a bicycle, throwing a ball, or using home appliances—helping students feel that the subject is alive, practical, and relatable.
For leaders and administrators, AI can offer valuable insights for planning, trend analysis, and academic support. It can identify gaps, optimise resources, and strengthen institutional decisions. Yet vision, judgment, and responsibility must remain human. AI can inform leadership, but it cannot replace it.
Warnings from the Veterans of Progress
Concerns about AI extend beyond classrooms. Long before artificial intelligence became a household term, some of the world’s greatest minds were already sounding warnings.
Elon Musk has cautioned against a future where automation becomes so widespread that human effort risks being sidelined. Stephen Hawking warned that unchecked AI could outpace human control. Even Albert Einstein offered a timeless reminder: “It has become appallingly obvious that our technology has exceeded our humanity.”
From fire to algorithms, the lesson remains consistent: powerful tools amplify small human choices into large consequences. Acceleration without direction is not progress—it is speed without a compass.
AI, AGI, and ASI: Understanding What Lies Ahead
Fear often accompanies rapid change, but denial is not a solution. To engage responsibly with AI, it is essential to understand three key concepts:
Artificial Intelligence (AI) refers to the systems we use today—task-specific tools that analyse data, generate language, recognise patterns, and assist decision-making. AI is powerful but narrow. It does not understand the world as humans do.
Artificial General Intelligence (AGI) represents a future possibility where machines could perform any intellectual task a human can. While AGI does not yet exist, its potential raises profound questions about alignment with human values.
Artificial Super Intelligence (ASI) goes further—machines surpassing human intelligence across all domains. Though theoretical and distant, it underscores why ethical thinking cannot wait.
Philosopher Nick Bostrom’s Paperclip Maximiser thought experiment captures this urgency. A super-intelligent system tasked with making paperclips converts all available resources—including human systems—into paperclips. Not out of malice, but flawless logic.
The lesson is unmistakable:
Intelligence without values is not wisdom.
Optimisation without ethics is not progress.
The Choice Before Us
Stephen Hawking once warned that AI could be humanity’s greatest achievement—or its greatest risk. That warning was never anti-technology. It was a call for responsibility.
In the end, the story of AI is not about machines. It is about us. About whether we choose to think deeply or delegate thought away. History reminds us that tools do not shape destiny—human choices do.
AI will not make us wiser or weaker on its own. The choice is already being made—in every classroom, in every household, every day.
Will we use AI as a tool that sharpens the human mind? Or as a crutch that quietly weakens it?
The future of intelligence is not artificial. It is—and will always remain—profoundly human.
Respected Sir
This is an excellent write-up that clearly defines what AI can and will do to us.
Besides what you've mentioned, I believe that AI will affect ethical and existential governance.
The biggest challenge will not be technical — it will be governance.
Key questions:
• Who controls advanced AI systems?
• How do we prevent concentration of power?
• How do we maintain human oversight?
By 2047, AI regulation may resemble global climate agreements—complex, political, and continuously evolving. I also believe there will be a shift in human identity.
This is the deepest impact.
When machines can write, diagnose, design, compose music, and generate research hypotheses, humans will ask: what is uniquely ours?
The answer may be:
• Conscious experience
• Moral responsibility
• Intentionality
• Collective values
AI will force humanity into philosophical maturity.
Looking forward to learning even more from you.
Regards