Thoughts on AI at the end of 2025
At this moment, the AI revolution feels more like Google Maps than the invention of the wheel
Let me start by saying that I am using the term “AI” like most people would in 2025. To be more specific, AI in this article refers to a mash-up of Large Language Models (LLMs), generative AI (GenAI) not just for language but also for images, videos and audio, as well as Agentic AI mediated by transformer-based LLMs. (Read the last one as “Agentic AI mediated by transformer-based LLMs” and not just “Agentic AI”, since the term “Agentic AI” can mean a lot more than what we are seeing.) Let me also start with the end, which is that the AI revolution, at this moment, feels to me more like a “Google Maps moment” than an “invention of the wheel” event.
For most of 2025, I've been consciously using AI in my work and daily life. I've used Gemini for work research and holiday planning, Nano Banana to generate images for my blogs, ChatGPT to construct a learning companion for my daughter, and NotebookLM to do deep dives into technical topics. Taking the agentic-AI-for-research use case further, I've also designed a Claude Code agent that extracts financial data to conduct market analysis. GitHub Copilot and Claude Code were my tools of choice for coding tasks, both personal and at work. I've even tried my hand at writing my own AI agent, just for kicks.
Throughout my experiments I was on the lookout for what makes this latest technology revolution special. I was interested because I wanted to find out its impact not only on my career but also on my daughter's future.
The first thing I noted is that AI, like any tool, needs to be learnt and practised with in order to derive benefit. I had to learn how to prompt it and how to release information to it step by step. At times, it felt like I was coaching an intern, something many others have remarked on as well. And like any intern, it is prone to missing details and steps. For example, my market analysis agent can perform most of the task but keeps missing minor steps here and there.
Next, I found AI a fantastic learning partner for topics that I have some knowledge of. I was trying to learn Active Inference and Variational Message Passing. Solely reading papers was not working very well for me, as sifting through and deciphering information from various authors required holding a great deal of it in my head. AI served as a sparring partner and a knowledgeable tutor. It's quite another story for things I'm not too familiar with, though. An example is the financial market analysis agent I was trying to build. I was not able to ask whether the analysis the AI was doing was correct, and when it came up with a suggestion, I couldn't help but feel suspicious.
Another related observation is that AI is great at boilerplate work, especially in coding. AI saved me a lot of effort in navigating the peculiarities of software libraries and frameworks. It got me to a starting point for a new software project really quickly, like setting up the project scaffold and writing tedious API specs and schemas. It was also tremendously helpful in debugging infrastructure issues like cloud configurations or deployment script errors. One thing that slightly caught me off guard is how much additional context engineering is required to get consistent results from AI for software development.
However, when it came to work that required creative thinking or iterative experimentation, such as data analysis, I found AI a little lacking. Not lacking in the sense that AI can't generate outputs, but lacking in the sense that 1) I don't feel any wiser after the fact, since AI did the thinking for me, and 2) for tasks where I had no “mechanical issues”, like data analysis where I know the tooling and syntax quite well, I felt like I had to do a lot of prompting to get results I would have produced really quickly on my own. In short, for tasks where the main obstacle was realising the idea in my head with tooling I'm decently well-versed in, AI was not much help.
Lastly, the latest AI revolution has changed the man-machine interface and has automated a lot of work that was previously impossible to automate. This has shifted the emphasis of value creation heavily towards critical thinking. To paraphrase a big-data-age saying: garbage thinking, garbage output. In the software development space, it means more effort is spent designing the infrastructure architecture and application functional modules, and evolving the software development lifecycle to help product owners come up with task specifications faster and to incorporate a more rigorous review process. AI doesn't take away the effort of thinking and learning fundamental concepts; it emphasises it, since the effort of execution has been taken off your hands.
As I reflect on the points above, the closest historical analogy to today's AI that came to my mind was Google Maps. For those who remember the days before Google Maps, people had to call, ask around or fiddle with paper maps, if they could get their hands on one. Even then, people got lost a lot and spent a lot of time waiting around for others. Google Maps changed that. Nowadays, I can drop into any decently developed city and navigate my way around. Google Maps also enabled a whole slew of new economic activities like deliveries and small-business advertising. It became my go-to tool for finding out where things are and how to get to places. It brought me a great deal of convenience, and it would be hard to go back to a state where I don't have it. Yet it did not simply automate my life away or make the world come crashing down. It did, however, diminish my ability to navigate without preplanned routes and instructions.
Now, I'm not saying AI is the same as Google Maps (I said “closest analogy”). But I do believe that when the dust settles, people might think of 2025's AI the way I think of Google Maps in 2025. I know it doesn't feel like it, given the frenzy surrounding AI. But as Howard Marks says, people often alternate between perceiving reality as extremely hot or extremely cold when it is more often than not somewhere in between. The current AI is not fully autonomous or deterministic. Neither is it creative without humans. It is, however, very good at well-scoped tasks that were previously “un-automatable” because they required pattern recognition once considered “human-level”. Lots of new economy will be built on top of this. Lots of old economy might be destroyed. In one shape or another, it will become an indispensable part of everyone's daily toolkit (though how long that will take is another important question, especially for the markets), and it will shape our behaviour.
But it does not make us less human. In fact, it emphasises the part of us that is uniquely human: our ability to think and create. Yes, there is inherently a danger that AI in its current form, wielded by Big Tech, might take away people's ability to think for themselves (case in point: social media). That will be true. But it will also be true that it will force onto society a conversation about the meaning of work and the meaning of creativity. And as always, we will end up somewhere in the middle.