Adopting machines into the workforce can be a challenging task. It’s important for leaders to communicate with their people about the change in work culture without instilling fear about their relevance in the future of the organization. Done right, an organization can become a true mathematical corporation, one that uses the power of data and analytics to achieve sustained success.
Josh Sullivan and Angela Zutavern, authors of The Mathematical Corporation, have a lot to say about this topic in their book. Let’s dive in.
What new capabilities are possible when the mathematical corporation merges machine intelligence and human intelligence?
The problems that the mathematical corporation can solve range from day-to-day operational functions that everyone performs all the way to curing cancer, living on Mars, solving world hunger, and stopping human trafficking.
Such lofty and elusive goals can be difficult for people to relate to, especially if they’re just thinking of their problems at work. A more immediate example would be a shift in how companies move goods, physical products, and people around—without relying on humans.
Merging human and machine intelligence can also help us decide where to invest and where to spend resources. We’re making those decisions today, but we’re doing it primarily by instinct and gut feeling. In the future, we’ll all be making decisions using machine intelligence rather than relying on instinct.
If corporations start relying more on machine intelligence, won’t that mean fewer jobs for humans?
Right now, we waste a lot of our time on rote work that doesn’t require a lot of critical thinking. A good leader should think of machine intelligence as something that can relieve toil and automate some activities—not the entire job—which frees employees up for higher level cognitive thinking. There shouldn’t be a wholesale job replacement, though that might happen in small pockets depending on the sector.
Machine intelligence won’t mean fewer jobs for humans; it will mean better jobs that are different from the ones we have today. It will change how we practice medicine, for example. Doctors will become interpreters of models instead of relying solely on their own instincts and knowledge of symptoms and treatments.
We hear a lot of doom and gloom, but machine intelligence will actually create jobs—as well as industries that never existed before. When people can spend their time on higher-value tasks, the overall number of jobs will increase as new businesses and industries emerge. And if you’re in a job that’s going to change, it’s better to know now and embrace the transition than to wait passively for it to happen.
Do the sacrifices in privacy outweigh the benefits that may emerge when mathematical corporations use our data?
In the future, there will be a whole different model of data ownership and privacy, in which people will have much more choice and control of how their data can be used—for example, for a nonprofit cause they believe in, and not by a retailer that they don’t like.
Currently, we’re in an interim space in which people don’t have a lot of insight or control on how their data is used. Companies have to make a lot of pioneering judgment calls about what data they do or don’t share, and what is or is not ethical. But we’ll eventually arrive at a more mature state of data ownership and ethics.
We’re in an ethics and privacy black hole right now—it’s a debate that’s not part of most conversations in any meaningful way. Instead, we’re driven by attorneys and end-user license agreements.
We need the Marshall Plan equivalent of ethics and privacy, because the foundations of tomorrow’s organizations are going to be data-hungry and analytics-driven. We need to admit that we’re really far behind. We have to say, “as people, we’ve got to figure this out before we get too far down the road.” Government, industry, and nonprofits need to come together to define and create the foundation for it.
How do you reconcile this emphasis on future questions with needing to meet an immediate financial bottom line and being answerable to shareholders?
Shareholders will need to invest based on the future, shifting how they invest toward future predictions rather than past performance. Corporations will naturally want to fall in line with that, so the corporate world won’t be out of sync with investors.
It’s a short-term versus long-term argument. Of course, there’s a quarter-by-quarter view. But a leader has a fiduciary responsibility to be a steward of the long-term strategic direction. There’s an opportunity cost to not doing it, because you’re preventing your organization from jumping off the standard growth curve, usually 1% to 4%. If you don’t try to go on this journey, you stay on that line and may never realize there’s another line of performance you’d be denying your shareholders access to.
For example, consider AirBnB versus Hilton. Hilton is a worldwide business that owns lots of property and employs hundreds of thousands of people performing a fairly rote set of processes. In five years, AirBnB reached the same valuation as Hilton, because it reinvented the business model. AirBnB doesn’t own property; it’s purely an accommodations provider, because it owns the platform for that. Six years ago, the CEO of Hilton said no one wants the AirBnB experience—and now the two companies are valued the same.
The future of work means something different to each of us: some see it as more technology and less human, some expect a more humanized space, and others imagine a no-workplace world. In our journey to unwrap the #FutureOfWork, Work2.org invites leaders from various industries to help our global community understand what posterity holds for workers, leaders, and organizations. While our team is busy bringing these fresh ideas directly to you, we would appreciate our community’s help in making it possible. If you like what you’ve read, please spread the word within your circles, and let us know anything you’d like us to bring into this #FutureOfWork conversation.