
Reflections Blog

Optimization, Oppression, and Optimism In The AI Age

3/30/2023

Future of Work, Opinion
Resistance is futile. The artificial intelligence revolution is underway. All hail our robot overlords.
Too much to start off a piece on the future of work? Perhaps, but many people have been feeling this way over the last few months. 
Late fall 2022 was rocked by the public release of ChatGPT, an online chatbot from the company OpenAI that leverages large language models to generate predictive text "responses" to user-entered prompts. The technology captured the public's attention, reaching over 100 million users within two months of launch, the fastest uptake of an internet-based application in history.
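For the curious, the mechanics behind that "predictive text" are conceptually simple even if the models themselves are enormous: the system repeatedly assigns a probability to each possible next word (token), samples one, and appends it, over and over, until a response takes shape. The toy sketch below is purely illustrative, with a made-up three-word vocabulary and hand-picked probabilities; it is not OpenAI's implementation, just the generate-one-token-at-a-time loop at the heart of these chatbots.

```python
import random

# Toy "language model": given the text so far, return made-up probabilities for
# the next word. A real large language model computes these probabilities with
# a neural network containing billions of learned parameters.
def toy_next_word_probs(text_so_far: str) -> dict:
    if text_so_far.endswith("robot"):
        return {"overlords.": 0.6, "assistants.": 0.3, "vacuums.": 0.1}
    return {"robot": 0.5, "future": 0.3, "optimism": 0.2}

def generate(prompt: str, n_words: int = 2) -> str:
    text = prompt
    for _ in range(n_words):
        probs = toy_next_word_probs(text)
        words, weights = zip(*probs.items())
        # Sample the next word in proportion to its predicted probability,
        # append it, and repeat -- this loop is the whole trick.
        text += " " + random.choices(words, weights=weights)[0]
    return text

print(generate("All hail our"))  # e.g. "All hail our robot overlords."
```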
And on March 14, 2023, OpenAI announced the launch of the even more powerful GPT-4, which they claim can score at the 85th percentile or higher on the LSAT, the SAT, and the AP Biology exam. During a livestream demo of the platform (which garnered over 1.4 million views in less than 18 hours), the company showed the power of this next version of their technology, which can perform a range of functions from assisting with writing and troubleshooting computer code to analyzing an image. The demo also highlighted how GPT-4 can turn a human-sketched, handwritten design into working HTML for a website, or advise on one's taxes (by understanding and acting on the thousand-page US tax code). The range and versatility of this model's capabilities are quite astounding!
Understandably, the release and promotion of ChatGPT, GPT-4, and other "generative artificial intelligence (AI)" products (Meta launched LLaMA in late February 2023, and Google's Bard and Anthropic's Claude both launched in March 2023) are being met with both awe and fear. There is a sense that the current "AI arms race" between companies and governments could lead to the technology outrunning needed safeguards and ethical discussions around its use. The rapid pace of advancement has led many to argue that we should slow its deployment, including a statement signed by over 1,000 technology and business leaders urging caution in growing the size of large language models until they are better understood and more regulatory and security guardrails are put in place.
Even the founder and CEO of OpenAI, Sam Altman, is unsure where this new technology will lead. Though he acknowledges generative AI will displace some human work in the near future, he is hopeful that it will ultimately create better jobs and more fulfillment for humanity. His recent interview with ABC News is shared below, and you can watch a longer interview with him and OpenAI's Chief Technology Officer here.
Versatile AI in a "Black Box"
The consulting firm Gartner has published a report on use cases for generative AI across a variety of industries and sectors. In it, they highlight how AI could assist drug design and materials science research, including optimizing the design of industrial components or semiconductors toward a particular use case or efficiency target. In addition, the investment firm ARK's 2023 "Big Ideas" report makes the case that advances in AI are the key catalysts for a variety of innovations, from precision therapeutics to robotics and autonomous transportation.
The ability of large language models to ingest vast amounts of data and have their parameters weighted to achieve certain outputs makes them incredibly versatile and powerful. One interesting twist, however, is that even the developers of generative AI models and interfaces aren't entirely sure how they produce their output. These models rely on complex neural networks and reinforcement learning approaches, which result in very complex "routes" from input to output that are not intuitive for a human observer to understand (though some researchers are working to make these neural networks more explainable). Ethical questions have rightly been raised about whether bias in these models, or in the data they were trained on, could result in flawed output.
And if this output becomes increasingly relied on to aid in consequential decisions (think AI-assisted mortgage determinations), the inability to understand the basis from which the model generates output is problematic. It brings up the question of whether all output, products, or decisions that affect society should be rendered by a model that weights various pieces of input through an incredibly large neural network with hundreds of billions of parameters.
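As a rough illustration of why that "route" from input to output is so hard to inspect, consider the minimal sketch below. It is nothing like a production model, just a few layers of randomly initialized matrix multiplications, but it shows the core problem: every intermediate step is a vector of numbers that is trivial to compute and nearly impossible to read meaning into.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A tiny feed-forward network: 4 inputs -> 8 hidden -> 8 hidden -> 1 output.
# Production generative models chain far deeper stacks of layers over
# hundreds of billions of learned weights.
layers = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 1))]

def forward(x: np.ndarray) -> np.ndarray:
    activation = x
    for i, weights in enumerate(layers):
        activation = np.tanh(activation @ weights)
        # Each intermediate activation is just a list of numbers; nothing in it
        # explains *why* the model leans toward one output over another.
        print(f"layer {i + 1} activations: {np.round(activation, 2)}")
    return activation

# Imagine x encodes a mortgage applicant's details and the output is a score
# feeding a lending decision (the hypothetical example from the text above).
x = np.array([0.2, -1.3, 0.7, 0.05])
print("model output:", forward(x))
```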
​Can all our problems be solved with more data and more computing power? 
Spoiler alert: No.
​​
"Everyone will have their own white collar personal assistant."
Microsoft founder Bill Gates said as much in a recent blog post about the power and potential of AI. It is not difficult to see how generative AI could be very helpful as an assistant to humans, helping them be more creative and productive.
Thanks to Microsoft's backing of OpenAI, ChatGPT-like technology is being integrated into the Microsoft 365 suite of business software (Word, Excel, PowerPoint), an assistant Microsoft has dubbed "Copilot," as well as into its Bing search product. And recently Zoom announced an AI integration in its video meeting platform. AI as a productivity assistant is upon us.
And who wouldn't be excited for the day when Outlook or Gmail offers to author responses to your 50+ un-replied-to work emails in a matter of minutes? Let AI handle the boring, administrative things while you focus on more pressing matters. Though this says nothing about how the human reading your message on the other end feels about it, especially if they know it was written by AI. One could quickly see this resulting in some weird future where human beings are not "in the loop" of these digital communications at all: AI-written content being "read" by AI models and responded to with AI, and on and on it goes. In this infinite communication loop, what is the point of having a human involved at all? Does someone need to interpret the exchange and act? Could that one day be an AI decision maker (or a human decision maker "assisted" by AI)?
This quick thought experiment raises a philosophical question: what are human beings for in a knowledge economy that may one day be driven predominantly by AI that is more efficient and effective at a range of data-based tasks?
Productivity & Progress
Our modern, 21st-century American economy is quite fixated on productivity. In fact, many news outlets lament a recent drop in worker productivity, defined by the Bureau of Labor Statistics as how much total economywide income is generated (i.e., for workers, business owners, landlords, and everybody else together) in an average hour of work. Despite this metric's decline in the past few years, it is inarguable that, at a global scale, worker productivity has increased greatly over the past 50 years along with our technological progress.
In theory, increased productivity should lead to increased prosperity, right? This is only true if productivity gains are shared across a society. However, data show that the gap between the average worker's pay and overall productivity in the United States has grown dramatically since the 1980s. While productivity grew by more than 64% from 1979 to 2021, average compensation grew by only about 17%. In short, the "returns" generated from increased productivity have not been shared with the average worker in America. This fact shocks no one and reinforces the argument that inequality (in the US and beyond) has accelerated since the early 1980s.
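To make those two growth rates concrete, here is a quick back-of-the-envelope calculation. Only the 64% and 17% figures come from the discussion above; the index baselines of 100 are arbitrary and just make the comparison easy to read.

```python
# Index both series to 100 in 1979 (arbitrary baseline for illustration).
productivity_1979 = 100.0   # economywide income generated per hour worked
compensation_1979 = 100.0   # average worker's hourly compensation

productivity_2021 = productivity_1979 * (1 + 0.64)   # +64% growth, 1979-2021
compensation_2021 = compensation_1979 * (1 + 0.17)   # +17% growth, 1979-2021

# If pay had tracked productivity, compensation would also sit at 164.
gap = productivity_2021 - compensation_2021
share_not_passed_on = gap / (productivity_2021 - productivity_1979)

print(f"Productivity index, 2021: {productivity_2021:.0f}")   # 164
print(f"Compensation index, 2021: {compensation_2021:.0f}")   # 117
print(f"Share of productivity gains not reflected in pay: {share_not_passed_on:.0%}")  # ~73%
```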
One open question from these data, then, is whether the gains in productivity and efficiency to come from generative AI will be shared across society or concentrated in the hands of a few. The fact that OpenAI has "open" in its name and has, at least for now, made its ChatGPT technology available to anyone perhaps signals a more egalitarian approach to sharing this technology than many that came before it. It is important to note, though, that while the interface is "open," the source code and the details of the data used to train the model are kept carefully under wraps.
Companies like OpenAI suggest that these GPT technologies will make many workers more productive and efficient. The fact that anyone can access and use ChatGPT would suggest that anyone and everyone can become more productive by using it. This sounds like a great thing, but how much can human performance be optimized? And if we are talking about optimization, is that something better left to the machines and algorithms anyway?
Job Automation
Studies investigating the effects of robotic automation on industrial and manufacturing jobs found evidence that the deployment of industrial robots reduced both the number of human workers and their wages in those industries. Then came generative AI, which demonstrated that creative and knowledge work can be automated too, producing stunning visual images and often clever, compelling prose in a matter of seconds. This advancement is so new that we cannot yet measure its effects, but one can easily see that "automation" can replace more than routine, manual manufacturing work.
Just this month, OpenAI and researchers at the University of Pennsylvania released a pre-print (not peer-reviewed) publication titled "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models". In this study, they assessed occupations based on their correspondence with the current capabilities of the Generative Pre-trained Transformer (GPT) models behind technology like ChatGPT and GPT-4. They found that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted. They go on to state: "The influence spans all wage levels, with higher-income jobs potentially facing greater exposure." 
And just this week (March 26, 2023), Goldman Sachs Economics Research released a report predicting that two-thirds of US jobs are exposed to some degree of automation by AI, and estimating that roughly 7% of current US jobs could be replaced by it (which, based on a current US labor force of 166 million, equates to more than 11.5 million people, more than the population of the state of Georgia). The positions at greatest risk, according to the report: administrative support, legal positions, and architecture and engineering jobs. Meanwhile, the jobs with the lowest exposure to AI automation included those in cleaning and maintenance, installation and repair, and construction. On a more positive note, the report estimates AI could increase the total value of goods and services created worldwide by 7%. So, while some humans may be rendered redundant by AI, overall value in the economy could increase, raising the question we posed earlier: who will see the economic benefits of AI, and who will bear the costs?
The findings of these studies are perhaps not all that surprising as the world gets a better sense of what GPTs can accomplish. In some ways, administrative and knowledge work is the most automatable, even if large advances over current technology are needed to reach a (dystopian?) future where AI has replaced all human knowledge work.
The seeds of societal change planted during the COVID-19 pandemic, specifically the demonstration that much knowledge work can be performed remotely, may have also accelerated the ultimate replacement of this type of work by AI. Many white-collar technology workers have resisted the return to office work over the past few years, removing themselves from physical interactions with co-workers and supervisors. In many ways your work output is dissociated from your humanity when it is transmitted to your employer and customers by electronic means. Employers and coworkers often don't experience a remote worker as fully human, but rather as pixels on a Zoom screen or text messages on Slack. In a few years' time, AI may be able to conjure a digital collection of pixels that mimics a "real" human over video so convincingly that we won't be able to tell the difference. For years we have been increasingly dissociating the physical world from the digital one, and thus replacing pieces of the physical world (i.e., human employees) with digital options (AI workers) seems inevitable. The bigger question is how much replacement is possible and ethical.
The Future of Work is More Human
As AI becomes increasingly better at producing digital output, including images generated from programs like DALL-E or DreamStudio and computer code from Copilot, it is important to remember that there is still a physical world with many needs and problems that AI cannot yet act on effectively. Some jobs and tasks currently performed by humans will be very difficult, if not impossible, to automate away. Human skills and professions that emphasize physical interaction and engagement with objects in the world will remain essential as long as we live in a physical world with others (though the creation of a functional metaverse could change that). One could imagine a not-too-distant future where the skilled trades (which are already seeing renewed interest and appreciation among younger generations) gain even more respect from society. A robot is not going to fix your plumbing or electrical issue any time soon. Construction work is another example of work that seems un-automatable.
Ironically, tasks performed by the trade professions that many may view as "routine" but that require presence in, and manipulation of, the physical world are much harder to automate than many futurists predicted a few decades ago. And perfecting the autonomous car has been exceedingly difficult even with vast resources devoted to the effort. Maybe those truck-driving jobs are safer than initially thought? Bottom line: acting on and operating in the physical world is really hard for current AI.
Another large category of work that I think will be very difficult for AI to replace is in the caring fields - think healthcare, childcare, counseling, and even teaching (especially young children). These caring professions involve interacting with human beings and even if one theoretically could automate this type of work, I do not think humanity would be very keen on entrusting the care of its children, elderly, and sick to robots and automated systems anytime soon. Knowing that another human being cares about you and your loved one's well-being is critically important and while generative AI can seem human and caring, it is important to remember that this technology does not have intentions or motives. Current AI literally does not "care" about anything. 
So, while the late-20th-century economy valued data science and computer skills, the future of work will reward those who are handy and humane. Both technical skills centered on operating in the physical world and interpersonal skills that help one work with, understand, and assist other human beings will become increasingly important for thriving in the future. This shift in which types of work are valued could have massive societal implications.
Richard Reeves of the Brookings Institution has made a case for the growth of HEAL jobs (health, education, administration, and literacy), noting that only about 25% of these positions are held by men. He speculates that many men have shunned occupations like teaching and social work due to low pay, but as labor shortages in these "hands-on" fields continue, demand may push up compensation. This may be even more true as AI replaces careers in software and computer science (some startups have already indicated they are using GPT-4 to reduce the need to pay human coders), finance (a recent pre-print suggests ChatGPT can help you pick a diversified investment portfolio), and other fields that have been overwhelmingly held by well-compensated men. It is possible that the rise of AI will bring increased societal importance to the HEAL fields. These professions, and the skilled trades, have always been critical to a functional society, and it may take the rise of AI for more people to appreciate that point. I think, in the end, that will be a good thing.
A Post-Work World?
Even if demand for HEAL professions and the skilled trades rises in the years to come, the labor force will be forever changed by generative AI technology, and sooner than many may think. An interesting wrinkle in all of this: if it is knowledge workers whose occupations disappear, will there be a larger push by the educated "elite" to support the unemployed, and perhaps for a universal basic income? Sadly, when blue-collar workers in the US lost their jobs to deindustrialization and globalization in the 1980s and 1990s, many seemed unconcerned. When disruptive AI comes for the careers of journalists, computer programmers, and marketers, those influential individuals will definitely make some noise. Maybe they will be loud enough to help reshape our society to be less focused on work and wages as we have come to understand them.
While a world without "work" seems almost unimaginable, it is important to point out that this mostly means a world without humans performing some of the work. There may soon come a time when work that is tedious, laborious, and often undesirable to people is handed over to automation, AI, and machines. This would, in effect, free people from performing these tasks. They could focus instead on activities they enjoy (hobbies or creative pursuits) and on activities that, at least in theory, are good for society writ large: volunteering, community engagement, and care-giving, among others.
Image generation programs like DALL-E (from OpenAI) and DreamStudio (from Stability AI) and the advancement of AI video creation tools suggest that creative work will also be vulnerable to automation. Alternatively, some have argued these technologies will make humans more creative. Then again, we may need fewer paid creative professionals if AI is doing most of the heavy lifting. Imagine a future where AI can create a unique movie customized to your exact tastes. Would that lead to us amusing ourselves to death, glued ever more tightly to our screens?
​In a future of endless content and diversion, will we still seek to make an impact on the world?
Image generated via DreamStudio using prompt: "robot running past a man in a race, photo art, HQ, 4k"
Work Worth Doing
Far and away the best prize that life has to offer is the chance to work hard at work worth doing. - Theodore Roosevelt
Teddy Roosevelt's quote highlights a big existential question we face as AI is increasingly able to do more of our work: What is work for?
It could be argued that many human beings engage in "work" for meaning. They want to assure themselves that their life has purpose and impact, and this is often reflected in the work they do. The definition of "work" may change in the future, though. While today we think of work as an occupation performed in exchange for wages to subsist on, perhaps the work of the future will be focused on contributing to the betterment of others, decoupled from monetary compensation. This idea seems foreign to many, but who says human work has to be about delivering measurable economic value?
The increasing efficiency and productivity of AI suggests we shouldn't try to compete there. Let AI do what it is good at: optimizing parameters, testing models, and generating creative content. Freed from the need to "produce," humans may be able to think more about what they really care about and how they want to live their lives. Automation and the end of work as we know it may free us from a hyper-capitalist society where all too often our value is measured by what we produce. Instead, a future where AI worries about industrial and knowledge production at scale could free people from thinking of themselves as economic assets or liabilities on society. Rather, we could focus more on what we love to do rather than what we have to do.
This could lead to a place where everyone pursues their passions. We might also choose to focus on tasks and "work" that we know AI can "do better" but that we find fulfilling to do ourselves. Steven Hales touches on this idea nicely in a recent piece entitled "AI and the Transformation of the Human Spirit," where he makes the point that successful authors still write despite their financial position, and that technology has made it so most people don't need to bike five miles along an open road on the weekend (a Peloton is so much more efficient) or climb mountains, yet they do it anyway. I can't say it better than he does in the quote below:
When AI lifts the burden of working out our own thoughts, it is then that we must decide what kinds of creatures we wish to be, and what kinds of lives of value we can fashion for ourselves. What do we want to know, to understand, to be able to accomplish with our time on Earth? That is far from the question of what we will cheat on and pretend to know to get some scrap of parchment. What achievements do we hope for? Knowledge is a kind of achievement, and the development of an ability to gain it is more than AI can provide. GPT-5 may prove to be a better writer than I am, but it cannot make me a great writer. If that is something I desire, I must figure it out for myself. - Steven Hales
In addition, freed from the need to produce, we might re-engage with our fellow humans and have deep and meaningful conversations to find common ground and shared aspirations for our society; to realize that we are all human and deserve basic respect, dignity, and support. With all this new-found time, we could work to repair our societal and community institutions: schools, civic groups, government agencies, legislative bodies, and so much more.
In an ironic way, the rise of AI could bring us closer together as a species by helping us better understand what it means to be human. And while in the interim generative AI has the potential to produce more misinformation and destructive content, we are increasingly realizing that a negative digital world is not what we want. I believe much online negativity and tribalism has been fueled by a fear that the world's resources and opportunities are zero-sum, that there is not enough success or money or power to go around. In a future where AI has freed us of the need to "produce to survive," we may be able to evolve past the scarcity mindset that has been a reality for our species from the beginning. It will be a paradigm shift and will certainly take time, but I believe in the end this technology will produce more abundance for the human race. We will still need to do the human work of engaging with and supporting our fellow man, and of employing AI ethically in our society, to realize the benefits of this optimistic future, together.
Steven Hales concludes "AI and the Transformation of the Human Spirit" with something we all should think more about in this brave new world we are entering.
We are living in a time of change regarding the very meaning of how a human life should go. Instead of passively sleepwalking into that future, this is our chance to see that the sea, our sea, lies open again, and that we can embrace with gratitude and amazement the opportunity to freely think about what we truly value and why. This, at least, is something AI cannot do for us. What it is to lead a meaningful life is something we must decide for ourselves.
​- Steven Hales
The Future is What We Make It
One fact that I think gets lost in the wonder that is generative AI is that it is a product of virtually all of us. Yes, programmers and computer scientists created the neural networks and reinforcement learning models that give the AI its ability to generate output. But it was our collective knowledge, contributed via the internet, that fed these models: vast amounts of data generated by billions of humans over thousands of years sharing their art, ideas, knowledge, worries, fantasies, hopes, and dreams with the world. Out of these models has come something "human-like" but not necessarily human. We must remember the difference. While generative AI can produce wonders and probably will lead to more productive generation of media to entertain us and knowledge to empower us, it can also be leveraged by bad actors to turbocharge our fears and anxieties.
This technology is, in a way, a fun-house mirror for our humanity. It reflects back at us surprising, scary, and wondrous things. It could enslave us in a future where we are subjects of a black-box algorithm that strives for efficiency and productivity whatever the cost. Or it could make us more human, freeing us from the drudgery of many tasks and leaving us more time to focus on helping and caring for others and being in community with our fellow man.
What will we do with the time AI may give back to us?
​How will we be responsible stewards of a technology capable of immense constructive and destructive impact as it continually improves over the coming months and years?  ​
In the end, we will get out of this technology whatever we collectively feel is most valued and important to us. I hope we choose humanity over optimization and oppression.
​Our future depends on it.
Dystopian future city image generated with DreamStudio
Utopian future city image generated with DreamStudio
More from the Blog:
  • To Be Rather Than To Seem
  • The End of Work as We Know It: How an increasingly automated world will change everything (from December 2019)
  • Precarity, Competition, and Innovation: How economic systems and societal structures shape our future
For Further Reading
  • AI Is Like … Nuclear Weapons? The new technology is beyond comparison.
  • Big Ideas 2023 from ARK Invest
  • What Have Humans Just Unleashed?
  • Welcome to the Big Blur: Thanks to AI, every written word now comes with a question.
  • Why All the ChatGPT Predictions Are Bogus
  • The Economics of AI
  • The case for slowing down AI
  • Preparing for the (Non-Existent?) Future of Work (Brookings Institution Report)
  • Robots and Jobs: Evidence from US Labor Markets (NBER paper from 2017)
  • Post-work: The radical idea of a world without jobs
  • The Crisis of Social Reproduction and The End of Work
  • ​Redistributive Solidarity? Exploring the Utopian Potential of Unconditional Basic Income
  • Enjoy the Singularity: How to Be Optimistic About the Future
  • How to be a leader in an AI-powered world
Recent Pre-Print and Other Publications on GPTs
  • Predictability and Surprise in Large Generative Models
  • GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
  • Sparks of Artificial General Intelligence: Early experiments with GPT-4
  • Theory of Mind May Have Spontaneously Emerged in Large Language Models
  • GPT-4 System Card
Book Recommendations
  • Broken: How our social systems are failing us and how we can fix them​​
  • ​Futureproof: 9 Rules for Humans in the Age of Automation
Listen to:
  • Andrew Yang's Forward Podcast interview with Kevin Roose on "Futureproofing Your Career in the Age of ChatGPT"