Two weeks ago, I saw a man wearing a very unusual hat, and ever since then, I have been reminded of him almost every time I go to a new website or check my social media.
I was in an airport when I saw him, and he was boarding the same plane. I hadn’t seen a hat quite like that before and made a comment to my wife about it, and we both had a bit of a laugh.
I thought no more about it until I fired up my computer to do some work when we arrived home.
All over just about every page was advertising for the "genuine leather beret newsboy cap," the very same as worn by my co-passenger only hours before. This was despite my only vaguely describing the hat; having never seen one before, I had no idea what to call it.
Despite assiduously not clicking on any of these ads for fear of encouraging them, advertisements for the hat have spread across my internet usage like a virus. Even my wife is now getting them on her computer.
There are several things to say about this. The first is that while the hat looked very stylish on my fellow passenger, that sort of thing really is not for me.
The second is the can of worms this opens about online privacy and eavesdropping.
On further research, I discovered evidence that claims of providers listening in on private conversations via smartphones are an urban myth, despite the uncanny synchronicity.
Mobile security provider Wandera recently ran a series of experiments concluding that this was not the case and that smartphone eavesdropping is not a thing.
I must admit I am still skeptical of this, but parking that to one side, there are several broader issues.
The digital world is full of stakeholders, and I fully understand the retail industry’s ambition to pursue the idea of tailored marketing.
It makes sense for retailers to more accurately market products that data analysis says will be of interest to particular consumers. In the digital age, this is smart marketing and proper use of the new technology.
As a consumer, I might also appreciate being sent relevant offers and details about the types of new products I am interested in.
Take this a little further in the new era of open data in the financial industry, for example, and some consumers could benefit from product offers priced to reflect their higher level of creditworthiness.
All this is counterbalanced by the need for privacy. In my case, I still don't know how the digital eavesdropping was done, but I assume it was through Siri on my iPhone.
All I was doing was chatting about a hat. Still, a whistleblower in the U.K. recently cited how some Apple contractors "regularly hear confidential medical information, drug deals and recordings of couples having sex" as part of their job of providing quality control, or "grading," for Siri and improving its performance.
At that time, Apple told the Guardian that a "small portion" of Siri requests are analyzed to improve Siri and its performance.
Apple has reportedly suspended its reviews of Siri voice recordings by human contractors, and apparently, users will be able to opt out in a future software update. And of course, you can stop this eavesdropping by going into 'settings,' but that will disable Siri.
Not surprisingly, Apple is not the only company doing this. Google admits that its contractors listen to recordings made by Google Assistant and the home speakers it powers. 'Human review' was reportedly suspended for three months back in September.
Amazon's Alexa also collects and shares bits of audio, but now has a setting where people can opt out.
Over at Microsoft, Cortana has been exposed as listening to users' voice commands, in addition to translated Skype calls. The same has reportedly occurred through the Microsoft Xbox gaming console.
All of this might sound insidious, and it probably is. Leaving your device settings in a state where this can happen is about the same as not reading the terms and conditions for just about anything, and we know how many people do that.
There are alternate uses, however, which show that such 'eavesdropping' might not always be wrong. Just as biometrics can drive convenience, such as the Qantas Airways rapid boarding trial, the same technology can be used for repression, such as China's controversial 'social credit' scheme.
In this case, Amazon workers in India and Romania are reportedly using clips from Cloud Cam to train algorithms to differentiate between real and fake home threats, such as a home invader versus a dog wandering in.
Users could share specific clips with Amazon and can delete them if they choose to. The deliverable is improvements to Cloud Cam, which is integrated with Alexa and allows users to monitor what is happening at home via an app.
Catching Criminals by Voice
In the U.K., the National Crime Agency is reportedly working on a database of voice recordings to help the apprehension of criminals, developing an audio library that can recognize suspects from their “voiceprints.”
The project is using several sources for the audio, but one source for this "voice analytics" library is reportedly Alexa and the home speakers it powers.
A default ethics position would ordinarily be around consent, approval, and anonymity, but in the case of potential criminality, is that waived, and should it be?
Clearly, the digital world is creating all kinds of opportunities for technology providers to access people’s private worlds.
What is apparent is that the ethics, governance, regulation, and practice are lagging behind the capabilities of the technology.
Most people are also confused about what is going on, so the issue oscillates between an urban myth and a malevolent conspiracy theory.
None of this is good for our digital health, and if nothing is resolved, the issue is only going to get more toxic.
And finally, nothing will ever persuade me to buy that hat.