Project management is hard as it is. There are three aspects of project management that I think need extra attention when the project is about AI. Pro tip: if it's your first time, consider buying a Costco membership for the BIG aspirin containers 🙂
The three aspects (certainly not the only three) are:
1. Estimation
2. Change management
3. Testing
It's the very nature of AI-based development that makes it harder than traditional project management. Let's start by making a few assumptions.
Good engineers who can work well with AI are hard to find, but for this discussion I am going to assume skills are not an issue and that the PM has somehow formed a team with all the right skills.
Since AI functionality is often exposed as APIs, I will also make another simplistic assumption to make my point about PM: engineers don't need to be big-time mathematicians and statisticians to use the available APIs.
Just so we are clear, you do need great skills, and you do need some experience with the math to do AI projects well. I am making these assumptions only to keep the focus on PM in this post. For good measure, I am going to use waterfall language here, but it makes no difference if the project uses Agile.
Let's start with estimation. The first step of the process is to define phases, say development and testing, and use historical data to figure out how much effort and time each takes. This is easy for many parts of development like web/mobile/DB, since there are plenty of past projects to give realistic guidelines. Then we get to the AI part. Let's say just one model is in scope, and it uses machine learning.
Now we run into the first issue: whether the model we chose is the right one for the app we are building. In many cases we won't know until we actually test it. And even if it works okay, the insight it throws back to the app might not be valuable enough. So whatever we estimate, there is a good chance that during development we will need to switch drastically to something else, invalidating prior estimates. This is on top of all the other things that can go wrong in the actual design of the app.
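One way to shrink the "we won't know until we actually test it" risk is to time-box a quick bake-off of candidate models before the estimate is locked in. Here is a minimal sketch, assuming scikit-learn is available; the dataset, the two candidate models, and the accuracy metric are all illustrative choices, not prescriptions from the post:

```python
# Compare two candidate models cheaply with cross-validation before
# committing to one in the project estimate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the project's real training data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    # 5-fold cross-validated accuracy: a rough, fast suitability signal.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

A half-day spike like this won't guarantee the winner survives contact with production data, but it turns "drastic mid-project switch" into a known risk you can price into the estimate.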
Even if the model technically works, performance can become a dog. Not all models support parallel processing. In many cases another statistical method might need to be considered once you realize performance is going to be a problem. Ergo, the potential for rework is lurking throughout.
Then there is the second issue, which is that unlike traditional projects, source data can fundamentally change the fate of an AI project. Data is already a problem in traditional projects; unclean data can increase ETL effort, for example. But it's relatively easy to estimate ETL effort after you profile the source data. Even if the data is technically clean, your model may not find it useful. You will probably miss data you need, or need data from additional sources, and so on. And there is no good way to know this before putting the data through your model. It's also not uncommon for your base application to change its parameters significantly because your model eventually needs different information than what you originally designed for.
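Profiling the source data early is the cheapest defense against this second issue. The sketch below assumes pandas is available; the column names and sample values are made up purely for illustration:

```python
# Quick profile of incoming source data: missing values and
# inconsistent categories are early signals of ETL and model trouble.
import pandas as pd

df = pd.DataFrame({
    "age": [34, None, 29, 41, None],
    "income": [52000, 61000, None, 48000, 75000],
    "region": ["north", "south", "NORTH", "east", "??"],
})

# Fraction of missing values per column: a rough proxy for cleaning effort.
missing = df.isna().mean()
print(missing)

# Category inconsistencies: the same region spelled two ways, plus junk.
print(df["region"].str.lower().value_counts())
```

Even this crude pass tells you something the estimate needs to reflect: two of five ages are missing, and "region" has both casing drift and garbage values. Whether the data is useful to the model is still unknown, but at least the cleaning effort stops being a surprise.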
In other words, your dev and testing teams can drive you nuts if you walk in as PM with "traditional" expectations. To be fair, it's not because your engineers are being willfully difficult. They just have a harder time debugging when things go sideways, and a lot of things go sideways. The Costco membership helps with less expensive beer for the weekends when you reflect on your career choices 🙂
These two issues already make it hard to estimate and test an AI project. Now let's look briefly at the change management aspect.
Explaining what an ML model does in business terms is a non-trivial challenge to begin with. Explaining why the model doesn't work as planned and why it needs to be reworked, with no guarantee it will work the next time, is a much harder problem. Even when things work well, if a business person asks you to explain why the model arrived at the result it did, that's often hard to explain too.
Good PMs educate their stakeholders. AI projects can make them look like a tenured college professor 🙂
So how does a PM mitigate these issues? I will offer a few thoughts:
1. Start by educating yourself on the fundamentals of AI. You can even create little chatbots and the like with minimal to no coding. Get a feel for the actual work up front before you take over the project. Make sure all your non-technical people, like business analysts, do this too.
2. Set expectations with the client on the nature of AI projects before the project starts. You can do this with real examples, using simple models that you can mock up in Excel, and then help the client extrapolate what will happen if something goes wrong.
3. Insist on the highest-quality engineers. To develop something from scratch today, developers need experience with multiple frameworks and must know their trade-offs. AI models add a layer of complexity on top and have trade-offs of their own. Past experience won't eliminate all the issues I called out, but it will help minimize mistakes, and when things go wrong you will recover faster.
4. Aim small, miss small. Try hard to resist the temptation to build a complex system in one go. Set small goals and course-correct as you go.
5. Double down on data quality. Encourage your ETL team and data scientists to keep looking at the data, visualize it differently, and identify problems as early as possible.
6. Create a support system for you and your team. It's a new field and everyone needs help from time to time. If you don't plant your shade trees up front, you will face nothing but grief later.
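Point 5, identifying data problems as early as possible, can be made routine with automated checks that run every time new data arrives, so issues surface in the pipeline instead of during model debugging. A minimal sketch in plain Python; the field names and validation rules are illustrative assumptions:

```python
# Early-warning data checks: collect human-readable problems up front
# instead of letting bad rows fail deep inside model training.
def validate_rows(rows, required_fields=("id", "amount")):
    """Return a list of problem descriptions for the given rows."""
    problems = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) is None:
                problems.append(f"row {i}: missing {field}")
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            problems.append(f"row {i}: negative amount {amount}")
    return problems

sample = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": -5.0},   # suspicious value
    {"id": None, "amount": 3.0}, # missing key field
]
print(validate_rows(sample))
```

The point is not the specific rules, which will differ per project, but that the checks are cheap to write, run on every load, and turn "the model is behaving strangely" into "row 2 is missing its id".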
I am very curious to hear your thoughts and comments!
The only way to mitigate the risk is to add a risk premium to the estimate, in the range of 1.5x to 3x depending on the nature of the project and recent experience from other projects, until AI development becomes more mainstream. For example, a 40 person-day baseline estimate would be quoted as anywhere from 60 to 120 person-days.
Hi Vijay,
Very nice thoughts and points.
I would like to mention that setting client expectations, and doing the analysis (resources, requirements, impact, backup plan, deployment), is very important for a PM. Over-committing and saying "yes" to the client by setting inflated expectations always makes a PM sick (whether from the client, follow-ups from internal management, or taking too many aspirins because of the client), and it also results in an over-pressured team, unhappy management, and many other unhappy areas.
A PM should prefer to work with module deliveries at short intervals, based on module complexity, to get UAT feedback well in advance by pushing clients to do UAT. Some clients are theoretically right that there will not be any bugs, but in practice a few will be there, and the PM needs to be ready with some buffer. Some clients put in dedicated QA teams, some don't, and the difference in results is clearly visible.
Sometimes the client is also on the wrong track, and analyzing the requirements and applying PM expertise always adds great value.
For example, to add something from outside AI projects in the finance domain: in the payments industry I have seen clients migrating to chip-in credit cards who sometimes make the mistake of buying chip devices that don't have good firmware, don't have good support, and, most importantly, do not clear certification at a later stage. That is a big loss, and a headache for the developers who have to integrate broken APIs and firmware. So in every project the PM needs to think one step ahead about what is going to happen with the end product; in other words, risk analysis, end-product delivery, and the end product's benefits to the client are also important. Happy customer –> Happy PM –> Happy Team –> Happy Management –> Happy PM.
I am sorry if I am going too far off track 🙂