The ability of large language models to ingest vast amounts of data and have their parameters tuned to achieve certain outputs makes them incredibly versatile and powerful. One interesting twist, however, is that even the developers of generative AI models and interfaces aren't entirely sure how they produce their output. These models rely on complex neural networks and reinforcement learning approaches, which produce "routes" from input to output that are not intuitive for a human observer to follow (though some researchers are working to make these neural networks more explainable). Ethical questions have rightly been raised about whether bias in these models, or in the data they were trained on, could result in flawed output.
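To make the idea of "parameters tuned to achieve certain outputs" a bit more concrete, here is a minimal, purely illustrative sketch in Python (assuming PyTorch and a toy model; this is nothing like how a production LLM is actually built or trained). It shows the basic loop by which a network's weights are nudged toward producing desired outputs, and hints at why the resulting input-to-output route is hard to inspect: the learned behavior ends up spread across many numeric weights rather than in readable rules.

```python
# Toy sketch (assumes PyTorch is installed): a tiny "next-token" model whose
# parameters are repeatedly adjusted so its outputs better match the data.
import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32  # toy sizes chosen for illustration only

# Embedding -> hidden layer -> scores over the vocabulary.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, embed_dim),
    nn.ReLU(),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in "training data": random current tokens and the tokens that follow them.
current_tokens = torch.randint(0, vocab_size, (64,))
next_tokens = torch.randint(0, vocab_size, (64,))

for step in range(200):
    logits = model(current_tokens)       # the model's current predictions
    loss = loss_fn(logits, next_tokens)  # how far those predictions are from the data
    optimizer.zero_grad()
    loss.backward()                      # attribute the error back through every weight
    optimizer.step()                     # nudge each weight slightly to reduce the error
```

Even in this toy, the "explanation" for any single prediction is distributed across all of the model's weights rather than stated anywhere explicitly, which is one reason interpretability research is so difficult at the scale of real models.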
Spoiler alert: No.
The seeds of societal change planted during the COVID-19 pandemic, specifically the demonstration that much knowledge work can be performed remotely, may have also accelerated the ultimate replacement of this type of work by AI. Many white-collar technology workers have resisted the return to office work over the past few years, removing themselves from physical interactions with co-workers and supervisors. In many ways your work output can be dissociated from your humanity when it is transmitted to your employer and customers via electronic means. Employers and coworkers often don't experience a remote worker as fully human, but rather as pixels on a Zoom screen or text messages on Slack. In a few years' time, AI may be able to conjure a digital collection of pixels that mimics a "real" human over video so convincingly that we won't be able to tell the difference. For years we have been increasingly dissociating the physical world from the digital one, and thus replacing pieces of the physical world (i.e., human employees) with digital options (AI workers) seems inevitable. The bigger question is how much replacement is possible and ethical.
When AI lifts the burden of working out our own thoughts, it is then that we must decide what kinds of creatures we wish to be, and what kinds of lives of value we can fashion for ourselves. What do we want to know, to understand, to be able to accomplish with our time on Earth? That is far from the question of what we will cheat on and pretend to know to get some scrap of parchment. What achievements do we hope for? Knowledge is a kind of achievement, and the development of an ability to gain it is more than AI can provide. GPT-5 may prove to be a better writer than I am, but it cannot make me a great writer. If that is something I desire, I must figure it out for myself. - Steven Hales
We are living in a time of change regarding the very meaning of how a human life should go. Instead of passively sleepwalking into that future, this is our chance to see that the sea, our sea, lies open again, and that we can embrace with gratitude and amazement the opportunity to freely think about what we truly value and why. This, at least, is something AI cannot do for us. What it is to lead a meaningful life is something we must decide for ourselves.
- Steven Hales
How will we be responsible stewards of a technology capable of immense constructive and destructive impact as it continually improves over the coming months and years?
Our future depends on it.
- To Be Rather Than To Seem
- The End of Work as We Know It: How an increasingly automated world will change everything (from December 2019)
- Precarity, Competition, and Innovation: How economic systems and societal structures shape our future
- AI Is Like … Nuclear Weapons? The new technology is beyond comparison.
- Big Ideas 2023 from ARK Invest
- What Have Humans Just Unleashed?
- Welcome to the Big Blur: Thanks to AI, every written word now comes with a question.
- Why All the ChatGPT Predictions Are Bogus
- The Economics of AI
- The case for slowing down AI
- Preparing for the (Non-Existent?) Future of Work (Brookings Institution report)
- Robots and Jobs: Evidence from US Labor Markets (NBER paper from 2017)
- Post-work: The radical idea of a world without jobs
- The Crisis of Social Reproduction and The End of Work
- Redistributive Solidarity? Exploring the Utopian Potential of Unconditional Basic Income
- Enjoy the Singularity: How to Be Optimistic About the Future
- How to be a leader in an AI-powered world
- Predictability and Surprise in Large Generative Models
- GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
- Sparks of Artificial General Intelligence: Early experiments with GPT-4
- Theory of Mind May Have Spontaneously Emerged in Large Language Models
- GPT-4 System Card
- Broken: How our social systems are failing us and how we can fix them
- Futureproof: 9 Rules for Humans in the Age of Automation
- Andrew Yang's Forward Podcast interview with Kevin Roose on "Futureproofing Your Career in the Age of ChatGPT"