Looking ahead – what jobs will technology take away?



As with a lot of things like politics, religion and so on, the world is sharply divided between people who believe AI and robots (or automation in general) will take away more jobs than they create, and people who believe the opposite. I was drawn into this debate yet again by a few friends a couple of weeks ago – so let me jot this down while it is still fresh in my mind. My crystal ball is no more effective than yours when it comes to looking into the future – but there are a few scenarios where I do think jobs will be taken away. If your job is in one of these categories, the smart thing to do is to gain additional skills. Just to be clear – I also think there won’t be any net job losses. As always – these are strictly my personal views on the topic, and not those of my employer.

Another way to look at this: many companies will automate tasks and eliminate labor where they can to save costs. If you have skills that can help them earn more revenue, directly or indirectly, you get to stay employed. Otherwise, instead of reinvesting the savings, the company will probably treat them as profit, or keep the cash for the future. The job itself might stay in many cases – there just won’t be a need for as many people to do it. Granted – there will always be exceptions. Technology will also create a bunch of new jobs – which I will write about in another post.

I think there are at least four categories of jobs that will get disrupted soon.

1. If most of what you know is public knowledge

This is especially true for my own profession, which is consulting. In the 90s, when I got out of college, there was no Google. If I knew something special (from books, professional magazines, training etc.), a client would pay me to tell them what I knew that they did not. That does not happen much any more – information that is an internet search away carries no premium. Clients and consultants have access to the same information, so you need to know more than what is on the internet to fetch a premium. It might sound ridiculously obvious – but this is a bigger threat to (especially junior) consultants than almost anything else.

You absolutely need to stay a couple of steps ahead of the market to add value to a client today. Having a logically defensible point of view, knowing what others in the industry are up to, what disruptions are on the horizon, and what untapped opportunities exist are all still things a client will pay a premium for.

Consultants are not the only ones at risk either. A hotel concierge, for example, could also fall in this category. You don’t need a human to get you a restaurant reservation, check the weather, list the local tourist spots and so on. However, it will be hard to replace a human who can help you score a last-minute Hamilton ticket on Broadway, or one who can answer questions from four different customers in parallel and make them all feel special.

2. If your work is all about short tail questions from a customer

A lot of systems we use were not designed with end users in mind. Thanks to that, a lot of human intervention is still needed for people to use the things they bought. A good part of customer service calls are about answering questions like “What’s my account balance?”, “Can you reset my password?”, “Can I set up a payment plan?”, “Can I use a different credit card?” etc. Automation is already mature enough to do those things without human intervention. If that is all your skill is – your job probably will be taken away soon.

But there are a lot of things automation cannot do in this scenario – at least not yet. For example, talking a customer out of canceling a service is not something AI can do as effectively as a trained retention specialist. From a customer’s point of view, an automated way of resetting a password or making a routine payment is easier and faster than needing to talk to someone. But when you are upset about poor service, or want to talk through multiple options – there is nothing worse than listening to a machine with a long menu. Also consider this – as tech (and laws) improve all around, customers in most categories will have zero or low switching costs.

So if you are skilled at higher value service – you should be in hot demand. The money an employer saves by automating the short tail responses is what lets them invest more in higher value services. Of course we can also take a cynical view that some companies will just add the savings to the bottom line and not bother re-investing. While that is a short term possibility, I doubt they can do it in the long term without risking their whole business.

3. If you are in a job where process trumps thinking

There are several jobs that need very little original thinking. The critical thinking is done by the few people who designed the workflow, not by the people executing it. This includes things like preparing fast food, paying invoices, checking totals, scanning documents etc. These jobs are generally at risk given they are easily automated – and probably the only reason they are still around is the one-time cost of implementing new technology. Given all tech eventually commoditizes, this is only a temporary safety net.

Human intervention will be limited to exception processing in these workflows, especially exceptions that involve safety, brand issues, downtime and the like. What if the lettuce delivered is rotten and you need to run to the local grocery to buy some? What happens if the scanner stops working on the last day of the fiscal period? Do you want to harass a customer over collecting $100 when you know that in 3 months they are due for a $1000 renewal?

4. If your job is only about answering questions, and not about asking questions

Computers – and all the advances in AI and quantum computing and whatever comes next – will keep getting better at answering more and more complex questions. There are questions a computer can answer faster and more consistently than humans today – like who was the 44th President of the US? What planet is closest to Earth in the solar system? There are also questions that are really hard for computers, where a human can often answer effortlessly – like who was the quarterback of the Super Bowl winning team the year the 44th President of the US took office? But over time, we should expect computers to be able to answer most questions we ask.

But humans are way better than computers when it comes to asking questions. At some point, computers probably will interpret a medical image better, and compare it against a million other images faster, than any trained human medical expert. However, that is only a starting point – human experts are way better at asking better/unique/complex questions, exploring a body of knowledge and expanding on it. This is why I think no expert system will eliminate doctors – it will just make the quality of medical service a doctor can provide a lot better, and reduce mistakes. In short – we get to ask the smart questions, and mostly leave finding the answers to machines.

In various forms, this phenomenon will play out in every job. People who have access to smart machines that can find better answers get to make decisions faster and cheaper than others, and that is how competitive advantage will be created in the market.

So in a nutshell – differentiation in the future will be based on humans who can ask better questions than they can ask today, and machines which can answer better, faster and cheaper than they can today.

Sounds pretty straightforward, but we will of course fight this every step of the way. When horse-drawn fire engines were first introduced, humans raced them on foot to prove their superiority. We know what happened after that. For many reasons – political, legal, social and economic – just because technology can effectively solve a problem does not mean that it will happen fast. So in my view, there is practically very low risk of massive unemployment any time soon. But without a doubt, every job around us will evolve in a way that human value add becomes all about asking better questions and technology’s value add becomes all about giving better answers.

Microservices – What have we learned?


Yesterday, I shared some of my thoughts on serverless architecture and got a lot of feedback – much of it went back to SOA and then logically on to microservices. So I thought it might be worth penning some thoughts on this topic as well. Microservices are not a fad – they get used quite a bit today across big, important companies, although perhaps not very consistently. Also, I think we have enough experience by now to calibrate our expectations compared to a few years ago.


What is microservices architecture?

I am sure there is an official definition out there somewhere. But a simple way to describe it is as a way to design a solution as a collection of services, each of which does one thing quite well independently. Each service is its own thing – its own process, implementation, language etc. The collection of such services that solves an end-to-end problem is then orchestrated with a lot of automation capabilities to form an application.
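To make the idea concrete, here is a minimal sketch of one such single-purpose service, using only Python's standard library. The "price lookup" service, its SKUs and prices are all hypothetical – the point is just that the service does one thing, owns its own data, and is reached over the network like any other service would reach it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical data owned by this one service; no other service touches it directly.
PRICES = {"sku-1": 19.99, "sku-2": 5.49}

class PriceHandler(BaseHTTPRequestHandler):
    """A service that does exactly one thing: answer price lookups."""

    def do_GET(self):
        sku = self.path.strip("/")  # e.g. GET /sku-1
        body = json.dumps({"sku": sku, "price": PRICES.get(sku)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=0):
    """Bind to an ephemeral port; other services call this over plain HTTP."""
    return HTTPServer(("127.0.0.1", port), PriceHandler)
```

An application would be a handful of such services (orders, inventory, pricing, …), each deployed and scaled on its own, wired together by an orchestration layer.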

Why is this a good thing compared to a “monolith” architecture?

Separation of concerns is not a new idea in designing systems, and microservices architecture is built on this principle. The traditional way of building an application includes a front end (HTML/JS), a database (SQL/NoSQL/file/…) and an app server to handle logic. When we hear people criticizing “monolith” apps, they are usually referring to the server-side logic built as one logical whole.

Monoliths are not bad per se – they can be designed and implemented in a modular way, and can scale with the help of good design using load balancers etc. It is just that when it comes to scaling, testing etc., you have to deal with the whole even though only a small part needs to change. As cloud becomes more and more the default deployment option, the flexibility to scale and change quickly becomes a bigger need than in the past. Microservices are a very good way to deal with that. Many monolith systems will co-exist with the world of microservices.

How micro is a microservice?

This is one area where the wisdom from actual projects tends to throw cold water on the theory and philosophy of microservices. The tendency for many engineers is to go super granular in service definition. Almost without exception, everyone I know who started with this approach has regretted it and agreed it is better to start with fewer services and break them into smaller chunks over time. The operational overhead is quite significant once you are playing with tens of services – you now have to maintain and monitor all of them, and at some point there is a performance penalty for too much communication across a bunch of services that each do one little thing.

Another interesting aspect is whether your system needs to behave more in a synchronous or an asynchronous fashion. When you break the system into smaller chunks, you are essentially favoring asynchronous communication between them. If you then need it to work synchronously, you may question your granularity decision quickly.

What about the product/project team?

I have seen several ways in which teams are organized, and have spoken to folks who worked in such teams where I had no direct involvement. There are a few consistent themes:

  1. The need to communicate frequently and freely is a make-or-break criterion, way more than with traditional approaches. With great flexibility comes great responsibility!
  2. One big advantage that comes with microservices is that each service can be implemented in a different, fit-for-purpose language. And each service might choose a different database for persistence. While that is all great in theory, just because you can should not translate to you should. For large projects, too many technology choices lead to diminishing returns. Choose wisely!
  3. There is practically no good way to hand off to an ops team when dev is over. Microservices forces a DevOps culture – or at least DevOps tooling for sure. It’s probably a good idea to get EVERYONE on the team some training in the tooling. You need different muscles for this world than for dealing with a Tomcat cluster. The promise of CI/CD needs a highly trained, high performing team. I may even venture to say that the best practice is to have the same team that builds the system continue to support and enhance it. There are just too many moving parts to effectively transition to a completely new team.
  4. There is no substitute for experience. There are not enough highly skilled folks around, so the ones you get need to carry the weight of mentoring their less experienced colleagues. Written standards might not be enough to overcome this. A common observation is two services looking at the same business object – like a vendor object that is of interest to both an accounts payables service and a compliance service – and interpreting the semantics differently. Only with experience can you catch this early and converge.
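One mitigation for the semantic-drift problem in point 4 is a single, published contract that every interested service builds on. A minimal sketch, with a hypothetical `Vendor` object and field names invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical shared contract for the vendor business object.
# Both the payables and compliance services import this one definition,
# so "country" means the same thing (an ISO 3166 code) everywhere.
@dataclass(frozen=True)
class Vendor:
    vendor_id: str
    legal_name: str
    country: str  # ISO 3166 country code, agreed once

def payables_view(v: Vendor) -> dict:
    """What the accounts payables service cares about."""
    return {"id": v.vendor_id, "pay_to": v.legal_name}

def compliance_view(v: Vendor) -> dict:
    """What the compliance service cares about."""
    return {"id": v.vendor_id, "jurisdiction": v.country}
```

A shared schema doesn't replace the experienced person who notices the drift – but it gives the team one place to argue about semantics instead of two codebases quietly disagreeing.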

Is it truly easier to make changes compared to monoliths?

If you are a microservices fanatic, you are probably well versed in all the backward compatibility tips and tricks, and hence your answer has to be YES. I will just say that there are some cases where you will wish you were working in a monolith, especially when faced with pressing timelines. A good example is the changes many apps will need due to GDPR. When multiple services need new functionality, you have to wrestle with the best approach to get it done. Would you create a new service that others can call? Maybe a common library? Maybe change each service and make local changes? Each has obvious penalties. No silver bullets – decisions taken in designing the app will dictate whether you buy aspirin in a Walgreens-sized box or a Costco-sized box 🙂
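The backward-compatible flavor of such a change usually means making the new field optional, so services that haven't been updated yet keep working. A tiny sketch – the GDPR-ish `consent_given` field and the payload shape are both invented for illustration:

```python
# Hypothetical: a profile payload gains a new GDPR-related field.
# Making it optional, with a conservative default, lets old callers
# that don't send the field keep working unchanged.
def normalize_profile(payload: dict) -> dict:
    return {
        "user_id": payload["user_id"],
        # New field; absent means "no consent recorded", the safe default.
        "consent_given": payload.get("consent_given", False),
    }
```

Multiply that little dance by every service that touches the data and you see why a cross-cutting change across many small services can feel harder than one change in a monolith.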

What about monitoring, testing, debugging etc.?

All the overheads on these topics that come from distributed computing are in full force here. This is one area where the difference is significantly more noticeable than in the monolith world. Many of us are fans of doing canary releases. You should agree on a consistent philosophy for release management upfront. Whether we admit it explicitly or not, lean and fast deployment has a tradeoff with testing effectiveness. Essentially you are relying more on your ability to monitor your app (via all the services and messaging frameworks and redundancies) and make quick changes than on trusting impeccable test results. This is a significant change management issue for most technology teams and especially their managers.
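The core of a canary release fits in a few lines: route a small, configurable slice of traffic to the new version and watch it before widening the rollout. This is a toy sketch with invented version names – real routing lives in your load balancer or service mesh, not in application code:

```python
import random

# Hypothetical canary router: a small fraction of requests goes to the
# new version; the rest stays on the known-good one. In production the
# weight is dialed up gradually while error rates are watched.
def pick_version(canary_weight: float, rng=random.random) -> str:
    return "v2-canary" if rng() < canary_weight else "v1-stable"
```

The `rng` parameter is injectable so the routing decision is testable deterministically, which is exactly the kind of discipline the monitoring-over-testing tradeoff forces on you.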

So is microservices architecture a safe bet for the future?

There are plenty of public success stories of microservices implementations today – and conferences and tech magazine articles and YouTube videos and all that. All major SIs have expertise in implementing them. So in general, I think the answer is YES. However, I am not sure if microservices will be any less frustrating than monoliths in how they evolve over time. I probably will get some heat from my purist friends for saying this – but perhaps one way to smoothen the journey is to start with a monolith as we have done in the past, then as it evolves, have services that call the monolith’s APIs. And as you learn more, break down the monolith into a full set of services. I am not saying this because I am a non-believer – I am basing it strictly on the talent available to do full justice to a pure microservices-based architecture in a mainstream way. Just because Netflix did it does not mean everyone can. In any case, the mainstream pattern in large companies anyway is to start with their old monoliths and roughly follow the approach I mentioned.

Is Serverless for you?


One of the more recent architecture choices we can play with is the idea of serverless, aka FaaS (Function as a Service). Thankfully, it is not hyped the way, say, machine learning is. But nevertheless, it is widely misunderstood – often leading to bad design choices. I am just going to list a few questions I have been asked often (or have asked fellow techies), and give my point of view on each. I will try to keep this at a level where it makes sense for people who are not full time technologists.


Are there really no servers?

To begin with, the name serverless itself is quite misleading. Your code does not execute in vapor – it still needs servers. From a developer’s point of view, you choose a provider to handle a lot of the things servers do (but not everything), and you can focus on your application tasks. That is not the same as there being no servers. It’s one of those things where my developer friends smile and wink, and my Ops friends roll their eyes 🙂

Is it really much simpler than other options?

This is a very hard question to answer with a YES or NO. If we look back 10 years or so, it was all about service oriented architecture (SOA). Now think: how many well designed services have been created in the time since then? I personally have seen way more badly designed/implemented services than good ones. My point is that when you try to deconstruct an existing application into smaller ones, it often (not always) becomes more complex, not simpler. I know it is counterintuitive until you think it through, or work on an actual project. The simplicity argument for FaaS is strongest when you eliminate server management from what a developer has to worry about – but even there, you need to be careful about where the server logic goes. Sometimes you implement it in the client, sometimes you move it into the function, and sometimes you need dirty hacks to keep everything working. Simplicity is in the eye of the beholder.

Is it cheaper?

When used for the right scenarios, it is indeed cheaper. The obvious case is bursting – where once in a while you get a lot of traffic to handle. If you are not careful about designing – and especially if you don’t test with production-level data – it’s quite possible that you end up with a more expensive solution than dealing with full time server management. That is hardly unique to serverless though. Poor choices have a price to pay, now and/or in the future.

What does the developer not have to worry about?

For code implemented as a function, the provider (like AWS Lambda) takes care of making sure it gets executed when triggered. So things like horizontal scaling, fault tolerance and high availability are things you don’t need to worry about. Needless to say, you have to make sure your code is friendly to parallel execution. And it is still on your plate how the application as a whole works – Lambda etc. only controls what you passed on to it. So developers can enjoy more breaks (or officially, they can work on making their app so much better) 🙂
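For the curious, this is roughly what the developer's entire surface area looks like in that model – a single function the provider invokes per event. The sketch below follows the AWS Lambda Python handler shape (an `event` dict plus a `context` object); the greeting logic and field names are my own invention. Note there is no shared mutable state, which is what makes it safe for the provider to run many copies in parallel:

```python
import json

# A minimal Lambda-style handler sketch. The provider calls this once per
# trigger and owns scaling, availability and fault tolerance; the code just
# turns an input event into a response.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything outside this function – how events reach it, how responses flow back, how the app hangs together – remains the developer's problem, as the paragraph above says.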

Also, if your server code is in Java – which is true in many cases – you save a lot of time because you don’t have to redo the whole thing. You can lift and shift with low effort. Another way to gain more coffee breaks!

Also, API gateways become a good friend for developers in mapping parameters to and from functions. This makes development and maintenance more efficient in many cases. API gateways themselves are fairly new too – so there is that. I guess you can also consider implementing authentication via the gateway – but my security expert friends may not like that idea. I need to think this one through some more.

What is the big difficulty for developers in the FaaS world?

If you are like me, you spend more time debugging than actually writing code. As with all distributed computing scenarios, you have a tradeoff to make here. As we introduced these paradigms in quick succession, the monitoring and debugging tooling has not kept pace. So first movers typically spend more time debugging across all the layers without useful tools, and it can be quite frustrating and inefficient.

How about testing?

Since the computing is distributed, you should plan for a higher quality of testing in general. It’s also about setting expectations with testers and users. Since the server is not always on waiting for a request, it needs to be spun up each time, and then it stays warm only for a few minutes. If you have a lot of near real time needs, implementing them as FaaS is perhaps not the first option that should cross your mind. Also, a lot of dirty hacks get introduced during testing if you are not careful with design. A common one is to keep pinging the service to keep it awake because you realized some tasks take longer. You really need a close approximation of production peaks and valleys in testing to make sure you don’t get a midnight call to debug.
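That keep-awake hack usually looks something like this: a scheduler pings the function every few minutes, and the handler short-circuits the pings so they don't run real work. Everything here – the `warmup` flag, the payload shape, the stand-in task – is hypothetical, and to be clear, this is the crutch the text warns about, not a recommendation:

```python
# A sketch of the common "keep-warm" hack: scheduled pings carry a marker
# field, and the handler returns early for them instead of doing real work.
def handler(event, context=None):
    if event.get("warmup"):  # a scheduled ping, e.g. fired every few minutes
        return {"warmed": True}
    return {"result": do_work(event["payload"])}

def do_work(payload):
    return payload.upper()  # stand-in for the actual long-running task
```

It keeps an instance warm, but it also costs invocations and papers over a latency problem that production traffic patterns will eventually expose anyway – hence the advice to test with realistic peaks and valleys.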

Isn’t FaaS stateless?

The short answer is yes, of course. But often we need some hack to hold state – usually by using a cache or a database. Some session management logic could live on the client side too.
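The pattern is simply that the function itself remembers nothing between invocations – all state lives in an external store it reads from and writes to. In this sketch a plain dict stands in for whatever store you would really use (Redis, DynamoDB, a database), and the cart example is invented:

```python
# A stateless function: every call fetches state from an external store,
# works on a copy, and writes it back. Nothing survives inside the function
# between invocations.
def add_to_cart(store, session_id, item):
    cart = store.get(session_id, [])  # read state from outside
    cart = cart + [item]              # work on a new list, not in place
    store[session_id] = cart          # persist before returning
    return cart
```

Because each invocation might land on a fresh instance, any state the function "kept" in a local variable would simply vanish – which is why the store has to sit outside the function.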

Is AWS Lambda the only option?

Lambda is definitely the most popular and has been in the market the longest. But big players have entered too – IBM (OpenWhisk), Microsoft (Azure Functions) and Google (Google Cloud Functions). So you do have choices – they all differ in what they support, but they will probably converge over time. I will resist the temptation to talk about standardization 🙂

So what is a good place to start?

Serverless is a newbie in the world of architecture – so proceed with sufficient caution. Since my playground is the large enterprise space, what I have seen most is large existing apps offloading small parts of their functionality to functions. Companies that have embraced DevOps also consider serverless when they create new apps. At the moment, I don’t expect to see a lot of pure serverless architectures in large enterprises. Some kind of hybrid approach is probably where we are headed. Once the tooling gets strong, I am sure we will see definite patterns emerge.