Last week, Agile IT sent a team to Microsoft Inspire, the annual Microsoft partner conference where we learn about the newest technology in the Microsoft ecosystem, from both independent software vendors and Microsoft itself. Over three days, we were treated to demos and learning sessions on real-life applications in productivity, security, the firstline workforce, and augmented reality. Microsoft CEO Satya Nadella’s Corenote on Day 3 was one of the highlights of the week: he discussed both Microsoft’s mission and culture and the future of technology before showing off how he personally uses the Microsoft 365 productivity stack to run one of the largest companies in the world. (See part 2 of our recap here.)
The Need for a New Geneva Convention
Nadella spoke strongly about the need for user privacy, stating that “As technology and digital technology becomes more pervasive, we have to approach everything with the fundamental assumption that privacy is a human right.” He went on to explain that GDPR was only the first step, and that, when it comes to cybersecurity, we must work to ensure that the most vulnerable populations are protected against these new weapons, and that our time needs a new Geneva Convention. “…and when it comes to AI, we have to have a set of principles that guide the development of AI. We want to make sure that anything that we do doesn’t amplify bias, doesn’t hijack our attention, doesn’t sway opinion. These are things where we need to not only build the tools, the technologies, but it’s also a set of design principles, a set of ethical principles that we as builders of technology need to have.”
Project Natick and Underwater Data Centers
“One of the things that we recently did is we decided that, you know what? You want to build a data center and just submerge it in the sea. So Project Natick is essentially a data center, by the way, the most interesting part of this that I love is from start to finish you can get a data center built in 90 days. That’s what I love, which is the speed with which you can start provisioning data centers like you provision virtual machines.
And since 80 percent of the world’s population lives around the sea coast, you can now start envisioning a world where you can start bringing compute power close to all the population centers as needed. And, by the way, what’s most interesting is it’s zero emissions, fully self-sufficient. And so we’re very excited about pushing even what is the conventional wisdom of what is a data center. Now, of course, computing needs will go far beyond the data center.”
Edge Computing and Azure Sphere
“Now the other piece which I’m very, very excited about is Azure Sphere. We just launched it earlier this year. It’s a very innovative piece of work where we bring a silicon design that’s secure by design, we bring an operating system as well as a cloud service to ensure that the 9 billion microcontrollers become Azure compute endpoints, because these 9 billion controllers are what are making things like thermostats or refrigerators or microwave ovens all connected.
The question is how are you going to make sure that these compute endpoints are secure and capable of doing sophisticated processing. And that’s what Azure Sphere does, and you already see this being used across a lot of very different customers.”
Human Parity in Artificial Intelligence
“Microsoft has been working on fundamental AI breakthroughs for 20-plus years; 20-plus years ago is when Bill started Microsoft Research. And, in fact, just in the last couple of years, some of the advances, especially as measured by our ability to have human parity in a lot of these perception and language capabilities, have been pretty stunning. And it’s happening because of the ability to provision lots of compute capability, to have lots of data, and these new techniques of algorithmic progress, around deep neural nets in particular.
So for example, 2016 is when, for the first time, we were able to have human parity-level object recognition with that ImageNet Competition: a 152-layer deep neural net was able to show that these neural networks can recognize objects like human beings. And 2017 is when we had human parity on speech recognition using that Switchboard data set that’s been there for us for all time.
And just this year, earlier in January, we participated in this SQuAD Competition, which is a Stanford Q&A test for machine reading and comprehension. So this is the ability to read a piece of text and start answering questions. I wish I had it for the SAT. But in that ability to do machine reading and comprehension we now have human parity. And even in machine translation, as late as March of this year, we were able to do translations with human parity.
For us to be able to turn every industry into an AI-first industry, whether it’s retail or healthcare or agriculture, we want to make sure that they can take their data, in a security- and privacy-preserving way, and convert that into AI capability that they get the return on. That’s really the objective.”
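The 152-layer network Nadella mentions is Microsoft Research’s ResNet, whose key idea is the residual connection: each layer learns an adjustment on top of its input rather than a whole new representation, which is what makes stacks that deep trainable. A minimal NumPy sketch of that idea (the layer sizes, weights, and depth here are illustrative, not the real model):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """One residual block: output = relu(input + learned adjustment).

    The skip connection (the `x +` term) is what lets signals and
    gradients flow through very deep stacks such as a 152-layer net.
    """
    return relu(x + relu(x @ w1) @ w2)

# Toy network: stack many residual blocks on a single activation.
rng = np.random.default_rng(0)
dim, depth = 16, 50
x = rng.normal(size=(1, dim))
for _ in range(depth):
    w1 = rng.normal(scale=0.1, size=(dim, dim))
    w2 = rng.normal(scale=0.1, size=(dim, dim))
    x = residual_block(x, w1, w2)

print(x.shape)  # the activation keeps its shape through every block
```

Because each block only adds a small correction, the same pattern scales from this 50-block toy to the 152-layer networks used for ImageNet-scale object recognition.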
Conversational AI and Intelligent Agents
“One of the things that we’re also doing is to try and push this concept of language understanding, or this capability of language understanding, to the next level: to have the ability to do full duplex conversations. So that means two parties, the bot and the human being, being able to go back and forth in an open domain. So that is not in one domain but any domain. That’s, in some sense, an ultimate AI-complete challenge.”
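“Full duplex” here means both parties can speak and listen at the same time, rather than taking strict turns like a request/response chatbot. A toy sketch of the scheduling difference, using two concurrent tasks (all names here are illustrative, not any Microsoft API):

```python
import asyncio

async def speaker(name, lines, transcript):
    # Each party talks on its own schedule instead of waiting for
    # the other to finish a turn: that is the full-duplex part.
    for line in lines:
        transcript.append(f"{name}: {line}")
        await asyncio.sleep(0)  # yield so the other party can interleave

async def full_duplex_conversation():
    transcript = []
    # Run both parties concurrently rather than in strict alternation.
    await asyncio.gather(
        speaker("human", ["hello", "what's the weather?"], transcript),
        speaker("bot", ["hi there", "let me check"], transcript),
    )
    return transcript

transcript = asyncio.run(full_duplex_conversation())
print(transcript)
```

A half-duplex bot would instead block until the human’s utterance is complete before producing any output; interleaving the two streams is what makes open-domain back-and-forth so much harder.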
“I want to now transition to the last big shift, which is towards people-centered experiences. Fundamentally, so far, we’ve built applications which are what I would describe as device-centric. We’ve really not, whether it’s the operating system or the application, broken free of the device. And that’s what’s going to have to happen.
That means put people at the center, the relationship people have with other people, the activities and tasks that you do. That’s what’s got to drive the devices in our life, not the other way around.”