iOS 11 tip: Enjoy a web focussed on content by making reader mode in Safari opt-out rather than opt-in. Long press the reader icon & select “use on all sites”.
Enjoying the challenge of easily posting from my iPad to my Jekyll-backed microblog.
A self-hosted microblog. Hooray.
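Publishing to a Jekyll-backed microblog ultimately comes down to dropping a Markdown file with YAML front matter into the `_posts` directory. A minimal sketch of that step in Python (the `post` layout name and the `_posts` path are assumptions about the site setup; a real workflow would also commit and push the file so the site rebuilds):

```python
# Sketch: turn a short status update into a Jekyll post file.
# Layout name and _posts directory are assumptions -- adjust to your site.
from datetime import datetime, timezone
from pathlib import Path

def write_micropost(text: str, posts_dir: str = "_posts") -> Path:
    now = datetime.now(timezone.utc)
    # Derive a filename slug from the first few words of the post.
    slug = "-".join(text.lower().split()[:5])
    slug = "".join(c for c in slug if c.isalnum() or c == "-")
    path = Path(posts_dir) / f"{now:%Y-%m-%d}-{slug}.md"
    front_matter = f"---\nlayout: post\ndate: {now:%Y-%m-%d %H:%M:%S %z}\n---\n\n"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(front_matter + text + "\n")
    return path

p = write_micropost("A self-hosted microblog. Hooray.", posts_dir="/tmp/_posts")
```

Jekyll infers the post date from the `YYYY-MM-DD-` filename prefix, so the file lands in the archive in the right place without any further configuration.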
We want you to be able to tell Google: maybe the last four hours, just take it off and go off the record. […] you can switch on incognito mode. […] I want to save every conversation that I have with my daughter for eternity […]; but some other conversations, […] maybe with my general counsel at Google, I want to be private.
Google has a binary view on privacy. Things are either on the record or off the record—with the default being the former.
For things that are “on the record”, Google’s terms of service are very explicit about what Google can do with them. Namely:
- You grant them “a worldwide license to use, host, store, reproduce, modify, create derivative works […], communicate, publish, publicly perform, publicly display and distribute such content”,
- and, of course, to “analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising […]”,
- and lastly, “this license continues even if you stop using our Services”. 2
But for many people, privacy isn’t that simple, nor is it binary. Consider the following examples:
- You might keep a journal, which you want to have accessible even many years later.
- You might have private conversations with your child.
Would you want those things to be “on the record”? While many people trust Google with that data, as Pichai points out, it is questionable whether they are fully aware of what kind of license and access to their content they are granting, and to whom, when they use their phone or computer.
For people who do not want these things on the record while still getting the benefit of having them safely backed up and synchronised between multiple devices, Google provides no help; there is no option in between that lets you enjoy the benefits of cloud services without granting all these rights to Google.
If Google wanted to truly get better at privacy, they would do the following:
- Make content private by default rather than “on the record” by default.
- Enable end-to-end encryption by default where possible when sharing data between users.
As hinted at in the interview, Google wants to tackle the second point by using machine learning to infer better defaults than their current manual heuristics allow; that’s a good start. They should also do more on email encryption, and they should enable end-to-end encryption by default in their new Allo app—that would bring them on par with iMessage, WhatsApp and, soon, Facebook Messenger. The big one, though, is the first point; it is possible, as Apple demonstrates 3, but it is a shame that Google’s business model gives them limited incentive to follow suit.
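The “option in between” is technically feasible: the client can encrypt content before it ever reaches the provider, so the cloud stores and syncs only ciphertext it cannot read. A toy sketch of that idea in Python (the SHA-256 counter-mode keystream here is purely illustrative and not a vetted cipher; a real client would use an established scheme such as AES-GCM or NaCl):

```python
# Toy illustration of client-side encryption before upload.
# NOT real cryptography -- a hand-rolled keystream for demonstration only.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode, purely to show the shape of the idea.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

# The key never leaves the user's devices; the provider stores only `blob`
# and can still back it up and sync it without being able to read it.
key = secrets.token_bytes(32)
blob = encrypt(key, b"private journal entry")
```

The point of the sketch: backup and multi-device sync only need the provider to move opaque bytes around, so granting a broad content license is not a technical prerequisite for those benefits.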
Longer transcript of what Sundar Pichai said (slightly paraphrased by me for readability):
For me, the onus is on us to give enough value that people trust us. Privacy is something that machine learning and AI at Google will help us do better. Lots of times, it is hard to do privacy because we rely on manual heuristics and have to give you manual controls and settings to do these things. But we can do these better. Very soon you will be able to give your name to Google and we’ll pop up your My Account settings so you can control all of that. About a billion people went through these settings in the last year alone. But we want to get even better all the time; we want you to be able to tell Google: maybe the last four hours, just take it off and go off the record. We can do these kinds of things. When you use Chrome, you can use it any way you want; you can switch on incognito mode if you want to, and we are doing the same with the messaging product. We give users choice. All the time, we get smarter about giving users sophisticated privacy controls. You know, I want to save every conversation that I have with my daughter for eternity, because I want to be able to go back and look back et cetera; but some other conversations, maybe with my general counsel at Google, I want to be private. I want to be able to do those things, and we want to be smart about it all those times.
When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works […], communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services […] Our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection.
Licensing terms for Content in Apple’s iCloud terms (highlight mine):
[…] by submitting or posting such Content on areas of the Service that are accessible by the public or other users with whom you consent to share such Content, you grant Apple a worldwide, royalty-free, non-exclusive license to use, distribute, reproduce, modify, adapt, publish, translate, publicly perform and publicly display such Content on the Service solely for the purpose for which such Content was submitted or made available, without any compensation or obligation to you.
- Tech Note: Lessons from Converting Complex Submodules to CocoaPods
Facial recognition of anyone has gone mainstream in Russia 1, reports The Guardian. The FindFace app feeds all the data from a Russian social networking site through an advanced facial recognition algorithm and lets anyone quickly and easily identify anyone else 2. A big, big loss for privacy.
The founders see themselves merely as a cog in the ever-turning wheel of technological progress:
But Kabakov said, as a philosophy graduate, he believes we cannot stop technological progress so must work with it and make sure it stays open and transparent. […] “A person should understand that in the modern world he is under the spotlight of technology. You just have to live with that.”
That’s a convenient view for avoiding any moral questions about one’s work; it also ignores the fact that, ultimately, technology does not create itself: people are in charge of creating and adapting technology, and of steering the economic, political and societal systems around it.
Confirming that large-scale facial recognition indeed isn’t far from facial detection. ↩
With a 70% success rate. ↩
In the past there has been a lot of debate about the potentially kill-all-humans dangers of Artificial Intelligence. Sparked by Nick Bostrom’s book Superintelligence, which provides a concerning and detailed description, the likes of Elon Musk, Stephen Hawking, Steve Wozniak and many AI researchers have voiced their concerns — for an easily accessible overview, I recommend Wait But Why’s two-parter.
Is it all doom ahead? Ben Goertzel, who is actively working on creating a super-intelligence with his OpenCog project, argues otherwise. His response is a long, well-written read, and worth it to get an understanding of the AI optimists’ side.
The main arguments that stood out to me were:
1. A super-intelligence will likely continually re-evaluate and re-adjust its goals, so the initial goal is not nearly as critical, nor as likely to cause doom, as Bostrom makes it appear. He also rejects the idea that intelligence and goals are orthogonal, i.e., he argues that it seems highly unlikely for a super-intelligent system to adopt and stick with a “stupid” goal such as filling the universe with paper clips.
2. Utility maximisation is overly simplistic and unlikely to work well for a super-intelligence. Compare how little human goals and behaviours align with utility theory, even in purely economic terms, and how poorly Utilitarianism works as an ethical framework for deciding what’s right.
3. Rather than a case of us-versus-them when humans create a super-intelligence, it could be a convergence of the two: humans create the super-intelligence in a way that benefits them, and it is not clearly distinguishable what is human and what is the super-intelligence.
Points 1 and 2 are reasonable, and point 3 showcases the potential benefits (which Bostrom also acknowledges but doesn’t focus on).
I find Goertzel’s view particularly interesting because he takes Bostrom’s argument less philosophically and looks at it more practically. Goertzel is actively working on his approach to a super-intelligence and is optimistic that it is just a decade away. As such, he argues for his open approach, which is not compatible with Bostrom’s recommendation that work on creating a super-intelligence should not be done in the open, should be regulated, and should ideally be carried out by a small, isolated group of selected scientists.
Graham Lee (via Ole Begemann):
All of this represented a helpfulness and humility on the part of the applications makers: we do not know everything you want to do. We do know some things you might want to do: we’ll let you combine them and mash them up – “rip, mix and burn” as they used to say – making you more satisfied and our stuff more useful.
And in a follow-up on the paradox of scripting:
The message given off by the state of scripting is that scripting is programming, programming is a specialist pursuit, therefore regular folk should not be shown scripting nor given access to its power. They should rely on the technomages to magnanimously grant them the benefits of computing.
Letting users combine individual applications and web services can unlock the power of computers as a bicycle for the mind. Imagine the impact this could have if it were intuitive for a larger audience than just programmers 1.
- Tech Note: Upload Rejected Due to CFBundleResourceSpecification