One might argue that the world doesn’t need more words from Tom Davenport. I’ve written or co-authored 26 or 27 books, 240 or so Harvard Business Review articles, 71 MIT Sloan Management Review articles, and a bunch in Forbes, The Wall Street Journal, etc. All water under the bridge, and you can find them fairly easily if you want to.
But I find that I often need to discuss the most important topic on which I can shed some light, which is what will happen to humans in the Age of AI. None of the other places I typically publish are well-suited to deep or even shallow thoughts on the topic.
I have studied this issue for more than a decade, and co-authored two books squarely focused on it—Only Humans Need Apply and Working with AI, if you must know. But I am still quite uncertain about how the AI vs. humans story will end, or even what some of the middle chapters will look like. Or even whether we are at the beginning, middle, or end of the narrative. Or whether my outlook is optimistic, pessimistic, or just paranoid. Or, most importantly, whether the most likely outcome is large-scale automation or larger-scale augmentation.
I was much more optimistic when Julia Kirby and I wrote Only Humans Need Apply, because it was fundamentally about augmentation—how smart humans and smart machines can find ways to work with each other. I was still optimistic when Steve Miller and I wrote Working with AI, because we were able to come up with 30 chapters (edited down to 29 because our editor didn’t like the one on working with AI as a cancer patient), each about an example of humans already collaborating effectively with AI. We didn’t even have to work that hard to find them.
But now we have rapidly advancing generative AI, and it's made me quite paranoid about the future of human employment. In fact, I believe that if you are not paranoid about the future of human jobs, you're simply not paying attention. It's still true, of course, that artificial general intelligence (AGI) isn't here yet. It's still true that virtually every human performs multiple tasks in their job, and that AI can do only some of them.
However, we’re getting closer to AGI all the time, and AI can do increasing numbers of tasks. I’m particularly worried about entry-level workers, because now AI can carry out a pretty high proportion of the tasks they are typically assigned. I’ve been worried (at a lower level) about this since 2013, when Jeanne Harris and I wrote an article called “Automated Decision Systems Come of Age.” In that piece we commented that “the reality is that there is little need for low-skilled or entry-level employees once automated programs are in place.” We also wrote, “it is also by no means clear where companies will be able to find tomorrow’s experts. As the ranks of employees in lower-level jobs get thinner, companies may find it increasingly difficult to find people with the right kinds of skill and experience to create and maintain the next wave of automated decision systems.”
This week the New York Times published an article that confirmed my fears, or at least gave them added weight. Entitled "I'm a LinkedIn Executive. I See the Bottom Rung of the Career Ladder Breaking," it provided data suggesting that the entry-level labor force is already having a hard time, and that things are likely to get harder. Both AI and economic uncertainty seem to be the driving forces, and it's not yet clear which is more influential. The article suggested that AI-based code generation is the canary in the coal mine for entry-level workers, and that it's the reason hiring for software engineers has slowed considerably.
I had also seen this problem coming while doing research for Working with AI. Several of the AI collaborators we interviewed commented that they weren't sure entry-level workers would be needed much in the future. None had any answer to the question of how you create experienced workers if you are not hiring inexperienced ones. We wrote about this issue near the end of the book, but my co-author Steve Miller did a good job of portraying the more positive findings on it. I wasn't entirely convinced by them, but I thought the book would benefit from some optimism on the issue.
Then just a couple of days ago I was at the MIT CIO Symposium, where I moderated a panel. After it I spoke with one of the attendees, a CIO from a large financial services company. She said that they already weren’t hiring as many people for entry-level roles, particularly in software development. But the same trend was taking place in other parts of the company, and to such a degree that executives are discussing changing the basic organizational structure model. Instead of aiming for a pyramid—lots of lower-level employees, fewer middle managers, relatively small numbers of senior executives—they’re seeing more of a diamond. There will be many fewer entry-level workers, more middle-level experts, and the same number of senior executives.
This diamond model is discouraging enough for entry-level workers. But I asked her my usual question of how the middle of the diamond will be created in the future if the company isn’t hiring many people for the bottom of it. Like everyone else I’ve asked, she didn’t know.
As it happens, the next day I participated, along with several other AI-focused, mid- or late-career individuals, in a high school panel where we discussed what the technology will mean for students by the time they graduate from college. Below are a few things we suggested that might help:
Develop a "digital mindset." You don't have to be a programmer—in fact, anyone can now create at least a first draft of a program just by telling a large language model what they want—but you do need to know how AI systems work, what they're good at, and what they don't do very well.
Become a subject matter expert at something, anything. The most valuable employees of the future—maybe even the present—are those who understand AI but also have a deep knowledge of supply chain management, marketing, finance, or even English composition.
Never stop learning. AI is changing very rapidly, and you have to keep up with it.
Use AI a lot, but in the right way. Apply the technology to multiple aspects of your life and work to see whether it makes you more productive and effective. Don’t let it rot your brain, but rather try multiple prompts, edit the output, check the citations, etc.
Exercise your critical thinking capabilities. Analytical AI made a prediction for you? Look at the data and the variables in the model and check whether they make sense. AI wrote a paragraph for you? Review it to see whether you could do better.
Who knows whether the entry-level workers who take these approaches will be among the relatively few who come in at the bottom of the diamond. But at least they will have a better chance of getting a job and building a career than most people. And what will all the other people do who are not inclined to become AI-enabled, heat-seeking missiles? I'm not saying I know the answer to that question, but I hope to reflect on it in a later Substack post.
I have been thinking about this question lately as well. What I believe is often missed in this discussion is that the scope of entry-level roles will (need to) shift along with the technology. If part of the entry-level role has been information acquisition, AI will be able to support or take over that work going forward. Gathering information yourself by sifting through data will no longer be a core responsibility. But reviewing the proposals that AI generates, and role-playing through different personas or scenarios, will suddenly become part of it, both because time opens up for higher-level work and because these tools are far more capable and economical than the panel of human experts an entry-level team member would otherwise need to consult.

However, to be prepared for this new and evolving definition of roles, entry-level professionals need to be trained before they enter the workforce. Higher education, training, and application beyond using AI to write essays will need to adjust as well. Despite the availability of the calculator, students still learn addition and subtraction, and even more complex concepts and operations, so they know what digits to enter and how a result comes to be. While the use of GenAI has increased significantly over the past few quarters, students will need to learn the fundamentals of how to interact with AI, coupled with a deeper understanding of their domain (as you point out, too), so they can perform at the proficiency and quality of a professional with 3-5 years of experience on the first day of their first job.

In short: I am convinced that there will be "entry-level" roles for a long time; it's just a matter of what the scope for entry is and how to get from yesterday to today.
A personal experience with my two college-age children has had me thinking about this topic for a while now as well. I have been asking both of them how often AI is brought up in their classes as something for which they must prepare themselves, and their answer is virtually never. Even recently, when AI is discussed in their classes, it is likely more about cheating.
One child just graduated with a degree in supply chain and analytics. He received two offers upon graduation. The other child is still a college sophomore. For the past several years, I have been asking both of them how their universities are approaching the topic of AI. Sure, it gets mentioned now and again, my children acknowledged, but the universities don't appear to have raised awareness of AI to the point where students are trying to close the knowledge gap at their university, in their class or major selection, or on their own time. In fact, my children have been a bit skeptical about my AI warnings, thinking I have been secretly reading dystopian novels in the basement.
I did my best to help my sophomore select his business major, which is now Business Analytics and Information Systems (BAIS), a choice I felt was an upgrade over some other business majors in this regard. As for my older child who just graduated, I told him to pay attention to any mention of AI at his new job, since supply chain roles can be impacted by AI to a high degree. I told him to get involved in those initiatives and be the leader who connects AI projects to his supply chain knowledge. Check the AI output to verify its quality and find potential pitfalls. In other words, be the person who knows how to leverage AI as a force multiplier in his role to improve his professional profile.
Finally, I have been recommending books to them that focus on the deployment of AI agents in business and how it will impact workers. I don't believe either of them has read the recommendations. I am open to any suggestions or recommendations on how to keep the pressure on these two (TED talks, interviews, books, podcasts, etc.).