My guess is as good as yours on how AI will influence the world around us in the long term. All possibilities, from solving world hunger to us becoming cyborgs, remain on the table. However, I do have some thoughts on where this field is headed in the next few years – say the 5 to 10 year window. As we wind down 2017, I thought I would share four of my thoughts on this topic – enjoy them with your favorite holiday beverage 🙂
Invisible AI vs AI face-offs every second
Several big companies have invested in personal assistants powered by AI – with varying technology maturity. Some of the hottest startups across the world are working on giving the big companies a run for their money. Since I am not an impartial observer, given my day job, I will resist the temptation to predict a winner. More and more money and time is being poured into this category across the industry.
Personal assistants will be a part of everything we routinely do going forward, and the change will bring profound disruption all around us. For example – we might outsource our routine grocery purchases to a personal assistant. The logical next step is for grocers to stop sending us coupons and loyalty cards, and instead market to our personal assistants. That does not need TV advertisements or glossy newspaper inserts or targeted emails. That communication will be in some machine-readable form – say JSON, or perhaps even binary (a hypothetical sketch follows below). The best ad agencies will need to hire the best AI experts, not (or at least not just) the best creatives.
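To make that concrete, here is a minimal sketch of what a machine-readable offer pushed to a shopping assistant might look like. Every field name here is purely hypothetical – the point is simply that the "ad" is data for an agent to compare, not copy for a human to read.

```python
import json

# Purely hypothetical schema: a grocer describing a promotion in a form
# a shopping assistant can parse, rather than a coupon a human would read.
offer = {
    "vendor": "example-grocer",
    "sku": "whole-milk-1gal",
    "unit_price": 3.49,
    "currency": "USD",
    "discount_pct": 10,          # stands in for the paper coupon
    "valid_until": "2018-01-15",
    "loyalty_bonus_points": 50,  # stands in for the loyalty card
}

# The assistant would compare payloads like this across vendors and decide,
# with no TV spot or newspaper insert anywhere in the loop.
payload = json.dumps(offer)
print(payload)
```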
When customers outsource buying to personal assistants, vendors will need to resort to AI to respond as well. And working backwards, the entire supply chain will need to be redesigned to be a lot smarter. Both consumer and enterprise tech will face a fast-paced evolution.
The interesting aspect will be that most of this profound disruption will happen behind the scenes without us realizing what is happening – there will be an AI vs AI face-off (hopefully of a good kind) every second as we go on with our lives blissfully unaware.
AI Safety frameworks will emerge
As Spider-Man said, “With great power comes great responsibility.”
This takes many different forms in the field of AI, and might never get fully solved even in the long term. Also, the vast majority of these smart applications are probably never going to be actually harmful, even though they may become super annoying (I am looking at you, LinkedIn algorithms favoring double-spaced, god-awful posts).
- Since fast iteration is the norm for most AI initiatives, it is important that we have some way of proving the safety of such applications mathematically before deploying them in production. Today, we can already prove some things – but this needs to improve big time and become mainstream (a toy illustration follows this list).
- We need consistent hardware-level protections. Most interactions generating data for AI to work on will be between machines, or between machines and humans. Software cannot be the lone line of defense – hardware-level security should become a given. That will need a lot of standardization, which is not a term our industry particularly likes.
- Ethics and law need to be taught to all AI practitioners, and need to become part of the curriculum in early education. Awareness of the distinction between good and bad usage of AI needs to become a minimum requirement.
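As a toy illustration of what "proving safety mathematically" can mean today, here is a sketch of interval bound propagation over a tiny linear model, used as a deployment gate. The model weights, the input bounds, and the safety threshold are all made up for this example – real systems and real guarantees are far richer, but the shape of the idea is the same.

```python
# Toy "prove before you deploy" check: establish a hard bound on the output
# of a tiny linear model, given known ranges for each input feature.
weights = [0.8, -1.2, 0.5]
bias = 0.1

# Each input feature is assumed to lie in [lo, hi] in production.
input_bounds = [(0.0, 1.0), (0.0, 2.0), (-1.0, 1.0)]

def output_bounds(weights, bias, input_bounds):
    """Return the provable [lo, hi] range of w·x + b over all valid inputs."""
    lo = hi = bias
    for w, (x_lo, x_hi) in zip(weights, input_bounds):
        lo += min(w * x_lo, w * x_hi)
        hi += max(w * x_lo, w * x_hi)
    return lo, hi

lo, hi = output_bounds(weights, bias, input_bounds)
SAFE_LIMIT = 2.0  # hypothetical "never exceed" threshold

# The deployment gate: refuse to ship if the bound cannot be established.
assert hi <= SAFE_LIMIT, f"cannot prove safety: output may reach {hi:.2f}"
print(f"proved output stays in [{lo:.2f}, {hi:.2f}], within {SAFE_LIMIT}")
```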
AI Project team structures will change
Math and coding skills are not enough to do AI projects well. This is not new – we have already known this for a while. I think we will start seeing teams organized in four overlapping specialized groups going forward.
Most AI projects roughly follow the same cyclical sequence (a rough sketch of the loop appears after the list).
- Understand (language, sound, vision, smell, touch, etc.) and organize information from the environment
- Reason using math and logic, make trade-offs, and come to decisions
- Interact with humans and/or machines to convey decisions (UI, psychology, visualization, etc.) and collect feedback
- Learn from how the decisions actually worked out, and tweak how the problem will be solved from now on
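Here is that cycle sketched in code. Every function below is a hypothetical placeholder – the point is the shape of the loop, and that each stage calls for a different kind of specialist.

```python
# A rough sketch of the understand -> reason -> interact -> learn cycle.
# All names and logic here are illustrative placeholders, not a real system.

def understand(raw_signal):
    """Perception specialists: turn language/vision/audio into structured facts."""
    return {"observation": raw_signal}

def reason(facts):
    """Math/logic specialists: weigh trade-offs and pick a decision."""
    return {"action": "recommend", "basis": facts}

def interact(decision):
    """UI/psychology specialists: convey the decision and collect feedback."""
    print(f"Proposed: {decision['action']} based on {decision['basis']}")
    return {"accepted": True}

def learn(feedback, policy):
    """Learning specialists: adjust how the next cycle will be handled."""
    policy["accept_rate"] = 0.9 * policy["accept_rate"] + 0.1 * float(feedback["accepted"])
    return policy

policy = {"accept_rate": 0.5}
for signal in ["shelf is empty", "price dropped"]:
    facts = understand(signal)
    decision = reason(facts)
    feedback = interact(decision)
    policy = learn(feedback, policy)
print(policy)
```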
Today, we try to do all of this with very little specialization – except perhaps on the math/logic side, and in industry domain knowledge. But that won’t be sustainable as the scale grows – each area will need specialization, and a lot of collaboration across the groups.
Academia and Industry will become indistinguishable
There are two things an AI team needs to stay cutting edge – the quality of its AI talent, and the quality and quantity of data available to make the solutions smarter. No surprise then that most of my time gets spent finding and retaining such talent. This is true for all my peers across the industry too. If there is one set of people under even more stress than those of us in industry, it is the leaders of top universities. Industry and academia have generally had a good working relationship historically, but the war for AI talent now has industry aggressively poaching from universities. This might be great for the short term – but absolutely horrible for the long term. Who will teach the next generation if industry keeps poaching the best teachers and researchers?
Academia has started great initiatives to let professors go to industry and come back – but not in a mainstream way, as far as I can tell. And industry has not – in a mainstream way – gotten into the habit of thinking of academia beyond special projects and consulting. This is not an AI-specific problem – AI will just make it painful enough for both sides to get to a solution quickly. I think what will happen is a de facto working arrangement where there is little to no difference between academia and industry in the field of AI, with experts just wearing different hats as needed.
Happy holidays!
Another key facet is how employees will interact with AI. Jobs and business processes will change. Employee expectations of AI solutions will be both high and sometimes suspect. The consumer market is setting these expectations, and businesses will have to step up.