Steering AI Towards a Human-centered Future
- By Paul Mah
- November 22, 2023
What is the state of AI today? According to AI researcher and Stanford professor Fei-Fei Li, we stand at an inflection point in terms of the incredible progress of the technology and society’s awareness of it.
Speaking at NetApp Insight last month, Li talked about the “incredible progress” in AI. Li is no stranger to the field, having served a stint as the chief scientist of AI/ML at Google, and is considered one of the brightest minds in AI today.
“The ChatGPT moment is where we see how large language models have emergent behaviors, the power that they show. And this is just the first step out of the gate; I think there's more coming,” she said.
“There are multimodal large models coming out, and we've seen what GPT-4V is doing. But there's more. There's action models, there's world models, it's a very exciting moment.”
The role of data
What is the impact of data on AI? It's a question NetApp CEO George Kurian asked as he drew attention to Li’s PhD thesis that led to the creation of the seminal ImageNet project. And what lessons might businesses draw from it?
“It's a fundamental tenet of machine learning that what we want is generalization. We don't really want to memorize [a particular item or object]; what we want is to generalize and recognize similar flowers or plants. And that has to be data-driven.”
“Data has to be a first-class citizen in the world of AI. I think the point of ImageNet is to look at machine learning from an image point of view and bring data into the equation,” said Li.
“It was the first large-scale data effort in AI, and we discovered one family of models that truly can scale up with the data: the neural network. That was how, I think, ImageNet and neural networks were combined with GPUs for the first time. [This] changed the course of AI.”
AI for public good
Why did she agree to serve as an advisor on AI policies for the White House? To Li, this is a matter of responsibility: “It really dawned on me that my generation of technologists, we brought this technology to this world. We share the responsibility of harnessing it, guiding this technology to be human-centered, to serve the public good.”
While Li is cognizant of the private sector’s contributions to innovation, she also pointed to a growing asymmetry between the public and private sectors when it comes to AI.
“Not a single university today in our country can train a ChatGPT model. I even wonder if all of the universities' GPUs combined could train a ChatGPT model,” she said.
“When we have this level of asymmetry, we're doing a disservice to the country, because the public sector needs to be part of the leadership in evaluating and assessing the technology for public good.”
“We need a moonshot mentality for this country. We need to re-invigorate and re-invest in public sector technology, in many ways starting from a national AI research cloud, but also deeper investment in education, national labs, and other efforts.”
The future of AI
What does the future of work look like in this era of AI? To Li, technology has always played a prominent role in human society.
“I think the relationship between humanity and technology has been a very profound and ancient one. You know, human civilizations have never stopped innovating, and every step of the way, we have to rediscover and redefine our relationship with the tools,” she said.
Li cautioned against the irresponsible use of AI and argued that AI should be used to augment humans, not replace them. However, she acknowledged that jobs will invariably be impacted.
“I think the most important thing is human agency. We are a species of our own agency, and we owe it to each other to be responsible and to protect that agency. My view of AI’s role in our world is to augment humanity, to enhance us, not take away our dignity, not take away our human agency.”
“I know jobs will change. Every profound technology has changed jobs; there will be pain. If we're not careful, we'll have social unrest. If we're not careful, we harm the most vulnerable people, whether they're women, children, people of color, or people from different backgrounds. We have to be careful, but fear-mongering is not the solution. It's not responsible.”
“What's responsible is that we do responsible human-centered AI. We empower this technology to help people, and we empower people to use this technology to do better things. And that's what I think our responsibilities are for business as well as for the public sector,” she summed up.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/KamiPhotos