This is really difficult for me to say, since optimism about technology making our future better is what has kept me going all my adult life. It’s why, after a degree in mechanical engineering and an MBA, I chose to be a programmer. It’s also why, despite multiple leadership opportunities in sales and general management, I continue to be a hands-on technologist.
It’s not that I have suddenly become pessimistic about technology’s power to transform society – nothing could be further from the truth. It’s just that I have a lot more pessimism about the humans who use and control the technologies that will impact us.
I have been quite an active participant on social media – especially thanks to the easy access via my iPhone. Between Twitter, LinkedIn and Facebook, I have more than twenty-three thousand connections (including some duplication, for sure). On Twitter I only follow about 150 people, mostly because I can’t keep up with a larger feed. I have strongly believed that this network has been a net good for me.
I have thought a lot about what my primary principle for social media is. I think the honest answer is convenience!
I do 90% or more of my social media activity in the apps on my phone. At some point I started accepting the vast majority of connection requests without much due diligence – clearly not a smart idea, and I am slowly cleaning it up now. I haven’t fiddled with ALL the privacy controls on each platform. It’s not that I was fully ignorant of what these platforms did with my data – just that I didn’t think of it as more than a nuisance, with a bunch of merchants trying to sell me stuff non-stop. I have often discussed with friends in my line of work how some of these targeting algorithms could be optimized to make them less annoying.
Then this Cambridge Analytica thing came out (along with the continuing conversations about Russian influence on elections), and last night I read Zuckerberg’s response on Facebook. It’s extremely depressing, to say the least.
The irony is that last night was when I reinstalled the FB app on my phone after a month away from it – and the first thing I noticed was Mark Zuckerberg’s response! I did go and tighten my privacy controls as soon as I read it!
I work in analytics and AI, and have a special interest in getting insights from unstructured data. That means I know how easy it is for FB and others to gain a very deep understanding of our lives. I also don’t think that privacy controls by themselves are of significant benefit. When you have a lot of data from a lot of people, you don’t need every last bit from every individual to get the deep insights. I will spare the tech aspects here – but suffice it to say, these platforms have disproportionate power even if we assume they are all angels. We also know by now that they, and the people they give access to our data, are not exactly angel-like.
I do value the ability to stay connected with friends and family. I also enjoy the vacation pictures and puppy videos. So the only solution I can think of is to significantly reduce what I discuss on FB and the like. I didn’t miss Facebook when I stayed away for a month, so I also wonder if I could just get out of it for good and be done with it. I know I am not alone in these thoughts.
There is an interesting cross-cultural aspect to consider too. I have spent a lot of time in Europe thanks to my work, and there is no comparison between the US and the EU when it comes to privacy. If I had lived in Europe longer, I seriously wonder whether I would have traded privacy for convenience. Plus, the governments there wouldn’t have allowed a lot of what FB and others have gotten away with in the US. Given its global reach, I do expect FB to get hauled up in the EU at some point soon.
Then there was the poor woman cyclist who was killed by an autonomous Uber car in Tempe, AZ. It’s not very far from where I live, so this hit home harder than usual. Tempe police have released a preliminary report and video (it’s disturbing, so I am not linking it here). I really wish the lady had been far more careful about crossing the road at night. Such a tragic end! I am not at all a legal expert – but it’s quite possible, in my view, that the law might blame the lady and not hold Uber responsible for this accident.
I have a big interest in the topic of man and machine working together, and have written and spoken about it a lot. A critical question here is whether a machine should be held to a significantly higher standard than a human in a similar situation. Several of my friends think a machine should be held only to the same standards as humans.
For at least two reasons, I actually think machines should be held to significantly higher standards than humans:
1. A machine is more efficient than a human, and can keep getting more efficient in less time than a human can by comparison. But the flaws in those machines are also amplified several-fold, thanks to mass production. We can’t risk the world being full of half-baked machines, irrespective of the benefits in cost and convenience. No price is too high when it comes to protecting human life.
2. A machine can make faster decisions than a human, and can use more sources of information than a human can to make those decisions. In the same poor visibility, a human driver probably would have made the same mistake the autonomous car did – and that’s a fair argument. But vision is not the only sensory option for the car: motion detection, heat detection and so on are all options, and there are plenty of sensors/actuators/radar/lidar on such cars, with costs declining pretty fast. So I think it’s a false equivalence to say that a human driver would have made the same error and hence the machine should get a pass.
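To make the second point concrete, here is a minimal sketch of why redundant sensing matters. This is purely my own illustration – the sensor names and miss rates are made-up numbers, not anything Uber or any vendor actually uses. The idea is simple: if braking can be triggered by any one of several independent modalities, the chance that all of them miss a pedestrian shrinks multiplicatively.

```python
def detects_obstacle(readings):
    """Return True if any sensor modality reports an obstacle.

    `readings` maps a sensor name (e.g. camera, lidar, radar, thermal)
    to a boolean detection flag. Braking on ANY positive detection is
    the redundant, safety-first policy.
    """
    return any(readings.values())


def combined_miss_probability(miss_probs):
    """Probability that every sensor misses the obstacle,
    assuming the sensors fail independently."""
    p = 1.0
    for m in miss_probs:
        p *= m
    return p


# Illustrative numbers only: suppose in poor visibility a camera alone
# misses 30% of pedestrians, lidar misses 5%, and radar misses 10%.
# Together the miss rate drops to 0.30 * 0.05 * 0.10 = 0.0015 (0.15%).
print(combined_miss_probability([0.30, 0.05, 0.10]))  # ≈ 0.0015
```

Even with generous failure rates per sensor, the combined system is dramatically less likely to miss than vision alone – which is why "a human would have missed it too" is not a satisfying defense.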
And in the video, it looks like the driver sitting there didn’t notice anything until the last second – arguably because of trust in the machine to do a good job. This trust is what worries me. The situation where there is a pedestrian in front of the car is the straightforward one – the car should brake. It could get much worse in cases where the decision is a choice between two bad options, like hitting one person or swerving and hitting another. If the straightforward option itself is not reliable, how can we expect the machine to react in more complex situations?
I think Uber did the responsible thing by pulling its self-driving cars off the street. They are also apparently fully cooperating with the investigation. I also think the AZ authorities are correct in not making any snap judgements on tightening regulations.
This should wake us all up – testing autonomous systems is quite hard to begin with, and it needs a lot of interdisciplinary research investment to get better and more consistent. We are not exactly short on money or talent to get it done – we just need to make safety a bigger priority than it is now. I love capitalism as much as the next person – but commercial greed just cannot be allowed to overrule safety under the branding of capitalism.
I absolutely think our future is still about technology doing good things to improve our quality of life, including social media and self-driving vehicles. But it’s high time we took a long, hard look at what the top priorities are in our quest to get there. Better, faster, cheaper is not enough – we need to add SAFER as a first-class citizen in the value proposition, and it should not be negotiable!
2 thoughts on “The future isn’t all what it used to be anymore”
Reblogged this on Sensemaking Currency and commented:
The Insurance Institute for Highway Safety said that human drivers create 1.16 deaths per 100 million miles traveled, per their 2016 data. Already we have at least two deaths from AI cars, both of which were speeding (the Tesla driver, and this one), and nowhere near 100 million miles driven. Using the metric of miles driven per accident, Google’s self-driving car report to the government showed them to be horribly bad drivers, far worse than humans. And now we know that they are killing humans at 10 times the rate of human drivers, or worse. Both cars were speeding, and the first thing people claim is that the AI was not to blame. So many people are not thinking about a world where the assumption is that the vehicle can kill anyone who gets in its way, without responsibility on the vehicle. The shiny aspect of these vehicles has people blinded to the truth.
Amen.