I’m always annoyed watching horror movies when a character is running down a road being chased by, say, a truck. The victim could give themselves a fighting chance by veering off the path and into the woods, but they stick to the road, with predictable consequences.
AI is the chasing truck—I picture it hauling hundreds of Nvidia GPUs—and we software engineers are the chased. As the eighteen-wheeler bears down on us, we could stay on the road—refuse to use AI, treat our jobs as we always have, and stubbornly spend our time cataloging ways we are still superior to AIs. If we do, the truck will eventually run some of us down.
Or, we can make a smart pivot off the road, muddy our shoes, and plunge into the forest where the truck can’t follow.
Pivoting doesn’t mean switching careers at a time when the world retains an insatiable appetite for software. Computer-Aided Design (CAD) offers a reassuring analogy: it freed engineers from tedious, repetitive calculations and painstaking drafting, leaving them more time to design, collaborate, and simulate their designs.
Like CAD, AI copilots are a labor-augmenting technology rather than a labor-displacing one. For a modest amount of money, we can each have a super-speedy intern who helps us with our coding tasks and frees us up to think of other things. If laid off, we can take this assistant to a different company.
Using them can free up time to work on our skill sets, but we need to predict which skills we’ll still need in a decade or two.
In this post, I propose some heuristics for considering the long-term usefulness of upskills for the AI age. Next, based on those heuristics, I’ll audition and rate a few skill sets, such as user-centric thinking and code review. Finally, I’ll conclude that many more of us will have “hybrid” engineering archetypes in the future, mixing coding with other roles.
Skill-choosing heuristics
How do we think about future-proofing our skill sets?
Prediction is a messy business. As motivation not to screw it up, I’m setting myself a reminder to look back at this post in ten years to see how much I cringe. So, I’m avoiding giving hot takes on hard-to-predict things like whether the scaling hypothesis will continue or when Artificial General Intelligence (AGI) will arrive. I’ll stick to stuff we can reason about today.
Does the skill use training data AIs won’t have?
Code was designed to be written by humans and executed by computers. Billions of checked-in lines of code are available for LLMs to peruse in their training data. Once the AI’s GPU has generated the code, it can easily hand it to the CPU to execute. It makes sense that coding ability will be among AI’s first expert-level skills.
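That generate-then-execute loop also gives AIs a cheap way to grade their own work. Here’s a toy sketch of the idea in Python (the “LLM” below is just a stub returning canned code):

```python
def candidate_from_llm() -> str:
    """Stub standing in for an LLM call; imagine freshly generated code."""
    return "def add(a, b):\n    return a + b"

def passes_tests(source: str) -> bool:
    """Execute candidate code and check it against known answers: the cheap,
    automatic feedback signal that makes coding easy to train for."""
    namespace: dict = {}
    try:
        exec(source, namespace)              # run the generated definition
        return namespace["add"](2, 3) == 5   # verify against a known test case
    except Exception:
        return False

print(passes_tests(candidate_from_llm()))    # True
```

Execution results are exactly the kind of automatic feedback that is scarce in most other domains.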
If our goal is for AI assistants to complement our skill sets, investing in skills for which AIs of the foreseeable future will lack training data or human feedback seems wise.
Does the skill allow you to be more autonomous?
One of the under-appreciated facts about engineering is that it becomes exponentially harder (and takes exponentially longer) to build increasingly robust systems. Handwaving: it will take, say, twice as long to build a system with 99% reliability as one with 90%, twice as long again to reach 99.9%, and so forth.
If Elon Musk had understood this, perhaps he wouldn’t have predicted fully autonomous Teslas in 2017. If fully autonomous cars are to exceed human driving skills, they need to have fewer than one fatal crash per 100 million miles driven. That’s eight nines of reliability. So it’s not surprising that car autonomy features have been around for two decades, but we’re only now beginning to commercialize full self-driving (FSD) cars in relatively easy environments. We’ve been happy to hear lane-keeping suggestions from our cars because those only need two nines, but relinquishing control is a big deal.
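To make the arithmetic concrete, here’s a minimal sketch in Python (the doubling rule is my handwaving from above; the numbers are illustrative, not measured):

```python
import math

def nines(reliability: float) -> float:
    """Count the 'nines' in a reliability figure, e.g. 0.999 -> 3."""
    return -math.log10(1.0 - reliability)

def relative_effort(reliability: float, baseline: float = 0.90) -> float:
    """Handwaving rule from above: each additional nine doubles the effort."""
    return 2.0 ** (nines(reliability) - nines(baseline))

lane_keeping = 0.99               # two nines: suggestions only, human in control
full_self_driving = 1.0 - 1e-8    # one fatal crash per 100 million miles

print(f"lane keeping: {nines(lane_keeping):.0f} nines")                       # 2
print(f"full self-driving: {nines(full_self_driving):.0f} nines")             # 8
print(f"effort vs. a 90% system: {relative_effort(full_self_driving):.0f}x")  # 128
```

Under that rule, going from a two-nines lane-keeper to an eight-nines autonomous driver is a 64-fold jump in effort.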
Building skill sets such as delegation, strategic focus, and organization that allow you to operate more independently will thus differentiate you from AIs. Even as we develop and train Agentic AIs, it will take decades before we trust them to replace humans who build systems where many nines are required, which is true of most important software applications and infrastructure today.
Put another way: for now, AI will be labor-augmenting rather than labor-displacing, and your ownership skills will be a big reason why.
How much context is needed to solve the problem?
Even if AIs can be trained on the kind of data necessary to solve a problem, they will struggle with novel problems requiring lots of context because the relevant training data and human feedback will be sparser or unavailable.
Even in cases where AI can help, you’ll likely need the surrounding context to articulate the problem to an AI. Thus, learning your product’s vision and strategy, your users, and your organization will be paramount. Ask “why” often to place your work within a broader strategy.
Will it help you use AI assistants?
This one may seem obvious, yet many are currently thinking: “Don’t use AI! It’s a trap that will breed complacency and dull our minds!”
That’s an important subject for another day—here, I’ll assume you’ll need and want to use AI to keep up, and embracing that responsibly will help your career.
The auditions
I’ll rate some possible strategies using those heuristics. What will you be most glad you did in 2035?
Participate in coding competitions?
If you’re not yet a fluent programmer, this strategy has some merit: not so much as a way to gain speed, but as a way to rack up the coding repetitions that build skill. If you’re already a competent coder, practicing your speed to try to stay ahead of the AIs will acquaint you intimately with the big truck.
This strategy doesn’t move the needle on my proposed heuristics. That said, getting a job at a good company is still a good idea, and until interview loops adjust to the new reality, you’ll at least want to brush up on a few leetcode problems before you interview.
Grade: D+
Improve your frequency and depth of code reviews?
Carefully reviewing for correctness, legibility, and maintainability will help you mind your AI assistant’s hastily scrawled code. As AI improves, these concerns might diminish.
AIs will be trained on code reviews, say from GitHub and GitLab. (Though a lot of engineers are bad at code review, meaning the training data is suspect.) There’s ongoing research into multi-agent collaboration wherein teams of AIs could review one another’s code. But, as I mentioned above, full autonomy will be hideously complex. With no human in the loop, those AIs will also need to avoid security bugs, reliably fix production issues as they arise, patch software, and handle many other tasks.
Enter you.
Moreover, a strong reviewer reviews not only for correctness but also for fitness for purpose. How will the AI gain enough context about your project to do so?
Grade: A
P.S. Expect more job interviews in which you are asked to review code.
Delve deeper, becoming a domain expert in one particular technology area?
The economic incentives for AI companies to build highly specialized training sets are probably limited for the foreseeable future. Generating training data for narrow domains requires expensive specialists, so AI companies will focus on larger markets where training data is easier to collect and human feedback is cheaper to gather.
This upskill is a good bet for folks attracted to the Specialist archetype. However, specialists aren’t known for their autonomy or ability to switch jobs in a fast-changing world, so this one is scary to recommend.
Grade: B-
Focus on software design?
Software design often involves unique, bespoke challenges that may not appear in an AI’s training set. That said, it often boils down to design patterns already well-understood by AIs. When asked about problems, AIs will provide candidate patterns, allowing us to choose among them and adapt them to our circumstances. Thus, I would focus less on broad, upfront knowledge of design patterns and more on contextual design skills like strategy, requirements, and prioritization, which, by the way, also allow us to be more autonomous.
Grade: A-
Broaden your skills and become a full-stack engineer?
On the one hand, learning new languages and frameworks is now easier. You can ask coding assistants the right questions to help you quickly ramp up on unfamiliar developer technologies. And if you’re new to a part of the stack, you can brush up on the relevant design patterns.
You’ll become more fearless about what work you tackle, and that will enable you to be autonomous because you can solve important problems rather than look for your keys where the streetlamp is shining.
However, coding assistants are polymaths, too. Everything you learn, they will already know.
Grade: B+
Practice direct human-to-human skills such as collaboration and leadership?
Collaboration skills are timeless, and AIs won’t master them anytime soon. AIs aren’t capturing our water-cooler conversations or lurking on everyone’s Zoom calls, and in any case, we have basic human needs to interact with other humans. In addition, every unrecorded interaction we have with a colleague leaves us with something we know that the AIs don’t.
We have our own training challenges here. We need to practice collaborating, which may be hard if we work remotely. We also need human feedback to improve, which not everybody is willing to give. So, seek out collaboration and feedback, and read books like Crucial Conversations.
These skills will also pay off when collaborating with AIs: the quality of an AI’s code and advice is directly proportional to the clarity of your instructions and the amount of context you provide.
Grade: A+
Take on more ownership?
The skill of ownership—of solving problems end to end and figuring out what needs to be done at all times—checks all the boxes:
There is, as far as I can tell, zero training data for AIs on end-to-end ownership of software projects.
Agentic AIs will start to take on small tasks autonomously but under close supervision. The exponential challenge of building extreme agent reliability means we’re decades away from being supplanted here.
Ownership requires gathering all the context.
Skills like breaking problems down into pieces and delegating will help you use AI assistants.
There’s a reason managers highly prize engineers they can trust when delegating large tasks.
Grade: A+
Learn more about your users?
Focusing on your users’ scenarios and needs improves your ability to build the right software, which in turn makes you more autonomous. If your ultimate purpose is to serve users, knowing what they want and need and reasoning about how to provide it should be at the top of your priority list.
The training data story here is interesting. We’ve started recording and transcribing customer interviews, which should be a good source of product insight for LLMs. We can also instrument products to collect human feedback and feed that to the AI. Today, this training set is minuscule.
But in the future, an AI agent could play around with a product in order to develop knowledge of it. Think about having it poke around on the latest version every night. But will it understand enough human psychology to flag safety issues and discoverability gaps and turn those insights into design ideas?
What ultimately sells me on this upskill is that products are even more contextual than software designs. One of the hallmarks of capitalism is that products must be innovative to win in the market. They must carve out a niche nobody has found before, even if they rely on a few standardized designs under the hood. It’s hard to see how an AI could develop a comprehensive training set on a given user base, much less predict a future user base for a new and growing product.
More likely, in the future, you’ll communicate your product’s purpose and deep information about its target audience to an AI to assist you with various design tasks.
Grade: A
The above strategies have traditionally been part of the software engineer’s career ladder, especially at staff+ levels. They are just being reweighted.
But I hope that, having predicted the future with at least a small amount of rigor, you’ll have extra motivation to get off the road and more confidence to navigate the forest.
There are enough paths for most people to find something that fits their personality and talents. If you can say, “I’m learning something the machines can’t learn,” “I’m becoming more autonomous,” “I’m gathering strategic context,” or “I’m learning to leverage AI,” it’s likely a good use of your time.
The rise of engineering hybrids
Many of the above skills sound like those of a Product Manager or Engineering Manager. But that doesn’t mean you should drop engineering.
The synthesis of systems knowledge with other domains will put engineers in a unique position to solve problems. There’s a reason why EMs come from engineering backgrounds—they may not be the best people managers in the world, and they may not be the best engineers in the world, but the combined skill set is highly leveraged.
The same applies to Product/Engineering hybrid archetypes, who can intuitively seek the best engineering solutions to user problems. I feel this acutely. I recently switched to a PM role, and I already feel less powerful in certain ways due to my lack of deep familiarity with my company’s codebase and architecture. And I wish the engineers around me had better product skills.
Dual-track training will make you harder to replace. I’d wager that other hybrids, such as data science/engineering or design/engineering hybrids, will become more common.
Becoming a hybrid will mean spending less time coding, and yet you’ll be a better pure-play engineer than you are today: that is, you and your assistants, in totality, will be faster and more insightful. You’ll develop more robust code than you could alone in 2023.
Next Post: Users should be part of our training data
This evolution of software engineering will not just be about individual choices. The entire industry must shift. Interviews, education, training, mentorship, and performance evaluation must adapt, led by clear-headed folks who can convincingly predict where AI is going and show how we will build our skills around it.
Product thinking is among the weakest training areas in our discipline. The skills are known to the broader world, but most developers receive only ad-hoc experience. I’ve made it my mission to help inspire and train more Product/Engineering Hybrids, so in my next post (now here), I’ll attempt a start.
Acknowledgements: Big thanks to reviewers Adam Hupp, Ainsley Escorce-Jones, Patrick Dowell, Brian Krausz, and Grammarly for your insights on this post! And thanks to Claude for helping me research.
Thank you for a thoughtful write-up! I am looking forward to recommending it to people.
The whole situation with AI and coding reminds me of Advanced Chess, where humans compete at chess with full access to computers. Two interesting things came out of that for me. First, while a computer beats a human at chess, a human-computer pair is much stronger than a computer alone. Second, a very different archetype of player dominates advanced chess tournaments: lower-ranked chess players with stronger engineering backgrounds. The skill seems to lie more in high-level strategy and delegation than in remembering millions of positions.
Code-machine engineers are more like chess grandmasters, whereas PM-IC hybrids are more like the people who win advanced chess tournaments.
Also, chess is very much a “kind environment”, where things are very cleanly scoped and states are discrete. Making products to be used by humans in the presence of competition is very much a “wicked environment”, so AI has a lot of catching up to do before it “beats” a moderately skilled engineer at end-to-end ownership, as you have pointed out.
Great framework and a robust run-through of key scenarios, Drew! I appreciate you sharing these insights.