As we learn more about AI, a few things have become clear:
- Neural-network-based AI is very effective for many problems
- The human brain is itself based on networks of neurons
We also know that AI-based systems are still evolving and have limitations:
- An AI-based car might mistake a picture of a car for an actual car
- If an AI system is taught only a small set of vehicle types, it will ‘force fit’ every vehicle it sees into one of the types it knows
In short – AI makes mistakes because the diversity of its knowledge is limited.
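The ‘force fit’ problem above can be sketched in a few lines. This is a minimal, hypothetical example (the vehicle labels and features are invented for illustration): a closed-set classifier can only answer with labels it was trained on, so a vehicle it has never seen is still assigned to the nearest known type.

```python
import math

# Hypothetical training data: (length_m, height_m, wheel_count) per known type.
KNOWN = {
    "car":   (4.5, 1.5, 4),
    "bus":   (12.0, 3.2, 6),
    "truck": (9.0, 3.5, 10),
}

def classify(features):
    """Return the closest known label; the model has no 'unknown' option."""
    return min(KNOWN, key=lambda label: math.dist(KNOWN[label], features))

# A motorbike (2.1 m long, 1.1 m tall, 2 wheels) matches none of the known
# types, yet the classifier must still pick one of them.
print(classify((2.1, 1.1, 2)))  # force-fit to "car", the nearest known class
```

The point of the sketch is that the mistake is structural: no matter how confident the model is, its answer space is limited to what it has already seen.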
What about a manager sitting in a meeting room reviewing plans and status? I have found that most management reviews are very predictable.
We know exactly how the review will go: the manager probably won't listen, what is presented is already being (mis)interpreted, and the actions that get suggested are all familiar. Everything is known in advance.
The big question is – in such cases, what value does the review add? The manager is so sure he already knows that he is busy forming opinions too quickly. As a result, he may have misinterpreted the context and prescribed the wrong actions.
Worse – nothing in the outcome of the review has changed since the previous such meeting.
If you see this manager as an AI system, some interesting observations emerge:
- Probably the manager did not learn anything new and is not willing to learn
- His ‘database of situations’ is very limited.
- The same jargon is thrown around.
- He is unable to see the big or small changes in context across situations. Hence he prescribes the wrong actions and forces the wrong direction.
So, the manager is not effective at all. The diversity of this manager's experience is not good enough and, more importantly, it does not seem to improve.
The only difference is that an AI system can be asked to learn, and it will. The experienced manager did not, and he will not.
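That difference – a system that updates when given new examples – can be sketched concretely. This is a toy illustration with invented labels and features, not a real review-scoring system: a simple prototype model whose ‘database of situations’ grows every time it is asked to learn.

```python
from collections import defaultdict

class SituationModel:
    """Stores one prototype (running mean of features) per situation label."""
    def __init__(self):
        self.sums = defaultdict(lambda: [0.0, 0.0])
        self.counts = defaultdict(int)

    def learn(self, label, features):
        # Unlike the manager, the model updates whenever asked to learn.
        s = self.sums[label]
        s[0] += features[0]
        s[1] += features[1]
        self.counts[label] += 1

    def predict(self, features):
        # Pick the known situation whose prototype is closest.
        def proto(label):
            s, n = self.sums[label], self.counts[label]
            return (s[0] / n, s[1] / n)
        return min(self.sums,
                   key=lambda l: sum((a - b) ** 2
                                     for a, b in zip(proto(l), features)))

model = SituationModel()
model.learn("schedule slip", (0.9, 0.2))
model.learn("scope creep", (0.2, 0.9))
print(model.predict((0.8, 0.3)))   # nearest known situation: "schedule slip"

# Teach it a situation it has never seen, and its answer space grows.
model.learn("budget overrun", (0.5, 0.5))
print(model.predict((0.5, 0.5)))   # now recognised as "budget overrun"
```

Before the extra `learn` call, the model would have force-fit the new situation into one of its two old labels – exactly the manager's failure mode.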
This should be a warning bell for managers. It would be a big mistake to think that humans behave differently from AI systems: the limitations of AI systems are the same limitations humans have.
With this in mind, managers can develop a plan to improve their effectiveness, simply by treating themselves as AI systems.
This could improve the work environment a lot. If it is not done, organizations know how to bring in ‘Bot Managers’ in the near future. These will monitor reviews and probably score the way you review. The outcome will not be very pleasant.
I hope this is motivation enough to improve ourselves.