Driving the AI Promise With Trust
- By Gavin Barfield, Salesforce
- January 15, 2024
Staggering advances in AI innovation have resulted in a paradigm shift, revolutionizing how businesses operate and interact with customers. Amidst this rapid growth in the potential and accessibility of AI, concerns and questions have also emerged about the data used to deliver on the very promises of these innovations. Headlines about security breaches, inappropriate surveillance, and misuse of personal data have left society conflicted about the trustworthiness of such technology. Coupled with growing concerns around misinformation, bias, and a lack of transparency, AI has increasingly placed trust in the spotlight.
Customers want to know what data is being used, where, and how, and be reassured that their data is adequately secured and protected. They also want assurance that generative AI is being used ethically and responsibly.
As the adage goes: “Trust is hard to gain and easy to lose.” We are now at an important juncture to explore a paramount question: how can businesses build trust with their customers, employees, partners, and investors amidst rapid innovation?
As custodians of customer data for over 24 years, Salesforce has always held trust as our highest value. As we move into this new era of AI, we must rethink our approach from two main perspectives: embedding trusted technology and building a culture of trust amongst employees and stakeholders.
Empowerment with trusted technology
Embedding trusted technology requires three main aspects that work in tandem: ensuring the technology deployed is developed responsibly and ethically, keeping it free of bias and toxicity, and building it on a strong data foundation.
The first two pillars go hand in hand and are critical to deploying AI effectively. Any AI tool that has been developed with ethical frameworks and guardrails in mind should be free of bias, toxicity, and harmful outputs. This builds consumers’ confidence and trust in the AI tools being used.
But this requires a strong data foundation. After all, AI is only as good as the data it’s trained on. If the data that generative AI models are grounded in is biased, inaccurate, or incomplete, outputs will naturally reflect these flaws, with potentially dangerous consequences beyond just propagating existing biases.
As the bulk of data that organisations use belongs to their customers, they need to respect data sources and ensure they have their customers’ consent. For this reason, we developed the Einstein Trust Layer, a new industry standard for trusted enterprise AI that ensures customers’ critical data remains just that: theirs, rather than ours to use.
With secure data retrieval and sensitive data masking, customers can reap the benefits of generative AI while maintaining privacy, security, and data governance controls.
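To make the masking idea concrete, here is a minimal, hypothetical sketch of how sensitive data masking can work in general: personally identifiable information is swapped for placeholder tokens before a prompt leaves the trust boundary, then restored in the model’s response. This is a general illustration only, not Salesforce’s Einstein Trust Layer implementation; the regex patterns and the `mask`/`unmask` helpers are assumptions for demonstration.

```python
import re

# Hypothetical sketch of sensitive data masking (not Salesforce's
# Einstein Trust Layer implementation): PII is replaced with placeholder
# tokens before a prompt is sent to an external model, and the original
# values are restored in the response. The mapping never leaves the
# trust boundary.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with numbered placeholders like <EMAIL_0>."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

prompt = "Draft a reply to jane.doe@example.com confirming +65 8123 4567."
masked_prompt, mapping = mask(prompt)
# masked_prompt is now safe to hand to an external LLM.
# response = call_llm(masked_prompt)  # hypothetical model call
response = f"Sure! I'll email {list(mapping)[0]} and call {list(mapping)[1]}."
print(unmask(response, mapping))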
Ultimately, investing in ethically and responsibly developed tools gives both employees and customers the confidence and capabilities needed to realize the full potential of the technology they use, and helps anticipate and mitigate risks from the get-go.
Tackling the trust gap starts from within
Building a culture of trust among employees and stakeholders forms an equally important part of the equation. Businesses may already be implementing or even developing new technologies, but is the workforce ready to use them effectively and safely?
Recent research with YouGov found that Singapore workers are using generative AI at work, albeit with a limited understanding of AI ethics and safety, leading to questionable use. Among Singapore workers already using and experimenting with generative AI, 76% present AI’s work as their own, and over 90% could not identify all the actions associated with using AI ethically or safely.
Instilling trust goes far beyond adopting the newest and safest technologies; these still require constant oversight and the skillsets to use them effectively. In Singapore, 63% of workers have not received training on how to use generative AI ethically and safely, while 78% say their companies do not have clearly defined policies on how generative AI can be used for work. Companies will need to listen to their stakeholders and develop clear, actionable frameworks to provide targeted training and tools for employees to use AI responsibly and ethically. This is imperative in instilling confidence and trust within employees, and will also have a ripple effect that extends to customers, regulators, and other important stakeholders.
Additionally, fostering trust on a broader scale demands wide-scale cooperation. Engaging other industry leaders, regulators, and stakeholders in the ecosystem will be crucial to advancing responsible AI public policies that have far-reaching impacts on trust levels. One such example is the AI Verify Foundation, launched in 2023, which aims to support the development of AI testing frameworks, standards, and best practices.
While we are just at the beginning of the generative AI journey, it is clear that trust is a crucial aspect we must not overlook. Every aspect of a company that is disrupted by technology represents an opportunity to gain or lose trust with customers, employees, investors, or regulators. Therefore, leaders must embed values and principles that build a foundation of trust across their organization’s culture, technology, people, and processes. Only then will we be able to harness the power of innovation and set ourselves up for long-term success.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.