Smart devices, a cohesive system, a brighter future


If you need a reason to feel good about the direction technology is going, look up Dell Technologies CTO John Roese on Twitter. The handle he composed back in 2006 is @theICToptimist. ICT stands for information and communication technology.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.

“The reason for that acronym was because I firmly believed that the future was not about information technology and communication technology independently,” says Roese, president and chief technology officer of products and operations at Dell Technologies. “It was about them coming together.”

Close to two decades later, it’s hard not to call him right. Organizations are looking to the massive amounts of data they’re collecting and generating to become fully digital, they’re using the cloud to process and store all that data, and they’re turning to new wireless technologies like 5G to power data-hungry applications such as artificial intelligence (AI) and machine learning.

In this episode of Business Lab, Roese walks through this confluence of technologies and its future outcomes. For example, autonomous vehicles are developing fast, but fully driverless cars aren’t plying our streets yet. And they won’t until they tap into a “collaborative compute model”—smart devices that plug into a combination of cloud and edge-computing infrastructure to provide “effectively infinite compute.”

“One of the biggest problems isn’t making the device smart; it’s making the device smart and efficient in a scalable system,” Roese says.

So big things are ahead, but technology today is making huge strides, Roese says. He talks about machine intelligence, which taps AI and machine learning to mimic human intelligence and tackle complex problems, such as speeding up supply chains or, in health care, more accurately detecting tumors or types of cancer. And opportunities abound. During the coronavirus pandemic, machine intelligence can “scale nursing” by giving nurses data-driven tools that allow them to see more patients. In cybersecurity, it can keep good guys a step ahead of innovating bad guys. And in telecommunications, it could eventually make decisions regarding mobile networks “that might have a trillion things on them,” Roese says. “That is a very, very, very large network that exceeds humans’ ability to think.”

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Dell Technologies.

Show notes and links

“Technical Disruptions Emerging in 2020,” by John Roese, Dell Technologies, January 20, 2020

The Journey to 5G: Extending the Cloud to Mobile Edges, an interview with John Roese at EmTech Next 2020

“The Fourth Industrial Revolution and digitization will transform Africa into a global powerhouse,” by Njuguna Ndung’u and Landry Signé, Brookings Institution, January 8, 2020

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is artificial intelligence. The amount of data we create increases exponentially every day, and this means we need to process it faster and protect it better. This is where AI comes in, from 5G to edge computing and quantum computing. The future is dawning, and AI is real.

Two words for you: AI-driven applications.

My guest is John Roese, who is the president and chief technology officer of products and operations at Dell Technologies. John joined Dell EMC in the fall of 2012 and was instrumental in shaping the technology strategy. He is a published author and holds more than 20 pending and granted patents, in areas such as policy-based networking, location-based services, and security. This episode of Business Lab is produced in association with Dell Technologies. John, thank you for joining me on Business Lab.

John Roese: Great to be here.

Laurel: So back in January, you wrote about three disruptive technologies emerging for 2020: quantum computing, domain-specific architectures, and 5G. We’re halfway through 2020. So what do you think, were you right?

John: Well, I think covid-19 changed timelines, but I don’t think it changed any of those three. Those three are clearly moving forward. Quantum is a slow, complex journey, but what we’ve seen is breakthroughs this year. We’ve seen kind of the vacuum-tube era of some very rudimentary quantum supremacy starting to materialize. And I think I said in that blog that it’s going to be a long journey—don’t expect it to disrupt the world tomorrow, but the physics are sound and eventually we will have the breakthroughs. And I think we’re continuing down that path. Domain-specific architectures are accelerating. We track 30-plus new semiconductor technologies used to accelerate compute of various workloads, including AI-ML [machine-learning] workloads specifically. And we’re, if anything, seeing more emerge. They’re now spreading out to the edge, and so clearly that’s occurring.

And then on 5G, one of the nice things that’s occurred during the covid-19 crisis is people’s acknowledgement of the need to be hyperconnected, to be able to work wherever you need, to be able to get health care whenever you need it, to be able to have a logistic infrastructure that works much more autonomously. And I think one of the big takeaways has been, we need better wireless, we need new advances in mobile connectivity. And if anything, I think the appreciation of the wireless industry and wireless technology as a foundational component of digital transformation has become significantly greater in the last three months. So all three of them hold, two of them just continue on. But the third one, 5G, definitely has been accelerated. And the interpersonal awareness out in society has just gotten better, which is a good thing for technology.

Laurel: Just to press that 5G question a little bit more, I feel like computing companies are paying more attention to 4G, now 5G. Is that because every company is now a telecoms company, more or less? Everyone needs to know what’s happening with wireless.

John: Yeah. Yeah. I think there’s two answers to that. The first is that it’s not that everybody’s becoming a telecom company. I think that we’re realizing that if you really want to digitally transform your industry, or your function, or your society, you don’t do that in a data center. You do that out in the real world. The data centers are important; clouds are important, but the actual data is produced and consumed out in the real world. It’s in hospitals, in cities, in factories, in your home. And in order for that to work, you need a better connectivity fabric. And so people have realized that all of the clouds in the world, and all of the edges in the world, and all of the digital transformation in the world, if they’re isolated silos without a robust digital foundational connectivity network, they’re not going to work.

And so suddenly people who weren’t that interested in telecom are suddenly very interested because they’ve realized you can’t have an edge if it can’t connect to a core. And if the edge can only be in three places as opposed to where it needs to be because it’s got the wrong connectivity, your entire digital transformation, your smart factory initiative, your smart city initiative just falls apart. So I think there’s been an understanding and an urgency of how important networking is that’s raised visibility.

The second though, is that telecom as an industry is moving toward the cloud and IT world. Everything about 5G tells us that it will not be built as legacy telecom (and I have some history in legacy telecom); it won’t be built the way we built 3G and 4G. It’s going to be built in the cloud era. It will use open hardware, software virtualization, containerization. It will be a heavy consumer of AI and ML technology; it just looks more like the stuff that most of the US technology industry is focused on. And so we’re not only going to be big consumers and we have a lot of dependency, but the actual technology that you use to build a 5G and beyond system is going to be much more dominated by IT and cloud technologies than legacy telecom. The reality is it will still have some telecom functionality, but this is pulling companies like Dell and many of the cloud companies into the 5G world. Not just because it’s interesting, but because we are necessary for it to be delivered in the right way.

Laurel: I feel like now is the perfect confluence for you specifically and your background, because to have someone who is so well-versed in the telecoms industry, and then also with cloud and all the other technology, you’re really pulling it all together into one place and one cause. And that to me seems like the perfect place for 5G to really explode, and again, to take people into that mesh thinking and away from these silos where you have your telecoms company here, and then you have your other computing company here, et cetera. How does this change again with covid and the edge now extending to people’s homes and out of the office?

John: Hey, by the way, just as an aside, my Twitter handle is @theICToptimist. And if you don’t know what ICT stands for, it is information and communication technology. And that goes back to, I think 2006 is when I joined Twitter, a very long time ago. And the reason for that acronym was because I firmly believe that the future was not about information technology and communication technology independently; it was about them coming together. So, here we are almost 20 years later, and yay, I think we were right. As we think about 5G and edge, edge is still early. We haven’t really built the smart things that we want to build. For instance, we don’t have automated delivery drones flying over our cities intelligently knowing how to bring us our goods and services without killing anybody.

Those are still in front of us. And we also don’t have self-driving cars, we don’t necessarily have smart cities, we don’t have really smart factories yet, but we have early indications of it. And we have enough evidence when we look at the early waves of “smartifying” the world, that one of the biggest problems isn’t making the device smart, it’s making the device smart and efficient in a scalable system. And so what we’ve discovered is, if you expect the device to be a standalone, fully self-sufficient, hyper-intelligent entity, you won’t have enough power to make it do whatever it’s supposed to be doing. The smartest car in the world, if it has to drive around a five megawatt reactor because that’s how much IT it’s going to use, is not going to be a very good car. And so edge has materialized, not so much as just an interesting place to do IT, but as an offload for the smartification of the world.

So we’ve already seen examples with things like augmented reality [AR]. Some of the first 5G edge examples are actually using augmented reality acceleration in the edge compute layer. And the idea here is you have a mobile device, a cell phone, AR goggles, whatever it is, that instead of processing all the artifacts, instead of doing all the video processing on the device, they actually push about 80% of that into an edge compute layer that has abundant compute and all the power it could need, and the result of that is that now you have a highly efficient AR experience on a mobile device that’s getting the assistance from the edge, but more importantly, it actually exceeds its original capability because it’s tapping into effectively infinite compute. So it has more artifacts, better video resolution, greater color depth.

These are things we’ve already demonstrated, which tell us the edge isn’t just a layer of IT, it’s one of the key components to allow us to bring intelligence to connected entities everywhere without putting the entire burden on the entity. And that collaborative compute model is likely to be the most powerful tool we have to solve this problem of power plus functionality plus cost, and getting the right combination between them. So it is early, but we’re now seeing enough evidence that that is the pattern, which makes edge even more interesting and actually more viable because we know that the device by itself isn’t the answer, the cloud by itself isn’t the answer. It’s this combination of cloud infrastructures plus edge infrastructures plus the devices all working together that gets us that better balance between cost, functionality, feature set, and deployment models.
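
To make the offload pattern concrete, here is a minimal, hypothetical Python sketch of a device splitting an AR workload between local processing and a nearby edge endpoint. The /render endpoint, the fallback behavior, and all names are illustrative assumptions, not any real Dell, carrier, or AR-framework API.

```python
# Hypothetical sketch of the device/edge split described above: keep the
# latency-critical work on the device and push the heavy video/artifact
# processing (roughly 80% of the load in the example) to a nearby edge node.
from dataclasses import dataclass

import requests  # any RPC or streaming mechanism would do; HTTP keeps the sketch simple


@dataclass
class Frame:
    frame_id: int
    pixels: bytes


def process_locally(frame: Frame) -> dict:
    # Cheap, latency-critical work that must stay on the device
    # (pose tracking, input handling).
    return {"frame_id": frame.frame_id, "pose": "tracked-on-device"}


def offload_to_edge(frame: Frame, edge_url: str) -> dict:
    # Heavy work (object detection, high-resolution artifact rendering)
    # sent over the 5G link to the edge compute layer.
    resp = requests.post(f"{edge_url}/render", data=frame.pixels, timeout=0.05)
    resp.raise_for_status()
    return resp.json()


def render_frame(frame: Frame, edge_url: str | None) -> dict:
    local = process_locally(frame)
    if edge_url is None:
        return {**local, "artifacts": "low-detail-local"}  # no edge in reach
    try:
        # Edge-assisted path: richer artifacts, higher resolution, deeper color.
        return {**local, **offload_to_edge(frame, edge_url)}
    except requests.RequestException:
        # Edge unreachable or too slow: degrade gracefully to device-only rendering.
        return {**local, "artifacts": "low-detail-local"}
```

The fallback branch is the balance Roese describes: the device stays functional on its own, but the experience improves whenever abundant edge compute is within reach.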

Laurel: So speaking of technology becoming better and smaller and faster, that also means at the edge, your device that you have in hand is part of that mesh and network. So the AI can extend out from the cloud to your device, and devices can be made smarter because of that, because the compute power is now in your hands.

John: Yeah. No, absolutely. In fact, I gave this example a couple of years ago where I was talking, we’ve done a lot of work in autonomous vehicle activity around the world. We work with most of the major automotive manufacturers, and we’ve learned a ton. But one of the examples I gave a long time ago was, we know that the car itself is going to be quite smart. A modern, autonomous vehicle has custom AI processing in it; it does a lot of really interesting sensing and analysis. And it does have to be to some extent self-driving, because for life-safety reasons, you don’t want to have the network go down and the car drive off the road. So let’s assume that’s all true. So, well, what would you do if you were now a car that was relatively self-sufficient, but was attached to a road that had edge compute associated with it? And the example I gave was, if you look at these cars, they have things that can sense the car in front of them, they can sense the road surface.

They can carry with them a lot of data that tells them how to predict the road surfaces and to adjust their suspension. They even have some things that can understand traffic patterns in kind of non-real time. But imagine if all those cars started to not just share their long-term data, but their immediate view of the world, their point cloud of the data around them in real time, and they shared it to nodes that were adjacent to them in real time so that your road itself had a master image of the real time understanding of all the cars. And the result of that was that if your car, when it was trying to figure, how shall I adjust my suspension for what’s coming next, didn’t just do it based on a database or what it could see, but it could ask the question of, what does everybody else see? And now it could predict things. Same thing for safety. It didn’t just have sensors that could see in front of it, but it could see what the cars, in front of the cars, in front of the cars could see.

And so the example I gave is, imagine your heads-up display as the user inside of a semi-autonomous or autonomous vehicle is showing you what the car can see, but the minute it can tap into this intelligent road with this edge compute layer, that heads-up display can see around corners. It can see things you can’t see, it can see what other people can see. And now your visualization of the real world in real time becomes just a much greater view of everything around you because of that collaborative compute model. That’s an incredibly powerful tool that isn’t possible if the device by itself is trying to solve this problem. And you can transpose that into many other industries, but the autonomous-driving one is fascinating because there you will have a very smart and robust device that can operate all by itself, but it operates better in many dimensions when it can tap into the collective consciousness of all of the cars, and all of the roads and all of the things around it in real time.

And the only way to do that is not by sending messages across the internet to the other side of the universe into a public cloud, but by getting this real-time responsiveness of tapping into an edge compute layer. So we think that pattern is going to become one of the big breakthroughs: when you don’t have to cross the internet, and you can get this collective understanding in real time local to you, even fully autonomous devices get better, and they get more interesting and they tap into entirely new business models.
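
As a rough illustration of that collaborative compute model, the sketch below (Python, with invented names; no real automotive or edge API is implied) shows a roadside edge node aggregating the near-real-time views published by nearby cars so that any one car can ask what everybody else sees.

```python
# Illustrative sketch of the "collaborative compute" pattern: each car posts its
# local view to a roadside edge node, and any car can query the merged view of
# the road ahead. All names and data shapes are hypothetical.
import time
from collections import defaultdict


class RoadsideEdgeNode:
    def __init__(self, max_age_s: float = 0.5):
        self.max_age_s = max_age_s      # only keep near-real-time reports
        self.views = defaultdict(dict)  # road_segment -> {car_id: timestamped view}

    def publish_view(self, road_segment: str, car_id: str, view: dict) -> None:
        # A "view" might carry detected obstacles, road-surface readings,
        # or a compressed point cloud of what this car currently sees.
        self.views[road_segment][car_id] = {"ts": time.time(), "data": view}

    def shared_view(self, road_segment: str, exclude_car: str) -> list[dict]:
        # Merge what every other car on this segment currently sees,
        # dropping stale reports so decisions stay near real time.
        now = time.time()
        return [
            v["data"]
            for car, v in self.views[road_segment].items()
            if car != exclude_car and now - v["ts"] <= self.max_age_s
        ]


# A heads-up display could combine the car's own sensors with, say,
# edge.shared_view("route-9-mile-12", exclude_car="car-42")
# to render obstacles the car itself cannot yet see.
```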

Laurel: So I read that an interesting part of your perspective is that, where we are with AI right now, it makes our lives better, maybe 5% to 10%, but we’re really far away from the Terminator. So even just with the autonomous vehicles, we’re talking about things incrementally getting better every time something new comes out, but we’re still far away from cars fully driving themselves, though that is an end goal. In the meantime, though, that 5% to 10% is still significant.

John: Oh, yeah, absolutely. I mean, now cars are an interesting game, because depending on who you ask, we might be a month away from a fully autonomous level-five connected vehicle, and some people would give you a different answer. I can give you my opinion. But in general, the reason I made that comment is, when you look at applying machine intelligence to anything, whether it be a self-driving car or a business process or user experience or whatever, gaming, there are two things you can think of as success. One is that you completely revolutionize it. You turn it into something that has never before been contemplated, a level-five self-driving car. That is a big, big jump, and it’s worth taking that jump—it just takes a very long time to get there.

The other way that you look at machine intelligence is, it is an augmentation to the cognitive tasks that human beings typically do. When you have to think, right now you’re on your own. It’s up to you to make that decision. Very rarely do you get much help on the thinking side. You might get a lot of data, but you have to sort through it. The recommendations don’t really come from technology; you have to figure it out. So what we realized early on, is by careful application of machine intelligence to places where human beings have to take data, understand it, and make a decision, we can actually accelerate that process or make it higher-precision, less prone to error. And so, as we took apart, whether it was the supply chain process of Dell, or the service process of predictive maintenance, or whether it was radiology systems inside of health care, where you’re just trying to find something in the image, those 5% and 10% improvements of just getting the process to work a little better were far better than you could ever get with human beings because the human beings were the baseline.

And every time you improve something like a supply chain by 5% or 10%, or I don’t know, radiology by 20% or 30% more accuracy in detecting things like cancer and tumors—that’s a very powerful outcome, not just to an individual, but potentially to society. And so one of the messages we’ve been giving our customers and we’ve tried to make clear to people is, we’re not opposed to the big breakthroughs, we think those are great. But there’s so much more we can do with this technology to take any place in every process that we have that implies that human beings have to make decisions, and augment them with machine intelligence to make those decisions more accurate, more speedy, more likely to have a positive outcome. And I use the word “any” because it really is anywhere that human beings have to make a decision, we can make that decision better with the careful application of machine intelligence.

And that seems like a really good thing to be doing right now, because it doesn’t require massive breakthroughs—it’s technology we have today. And every time we do it, the process gets better, the cost structure gets better, the outcome gets better.

Laurel: Speaking of better outcomes, we’re still early in this pandemic, but do you see specific opportunities surfacing with artificial intelligence specifically? As you just said, an obvious one would be health care, but there’s just so much data.

John: Oh, yeah, there’s an infinite number. Basically the way to look at it is, if you’re wondering where the use of machine intelligence to improve the effectiveness and efficiency of human behavior makes sense, just look anywhere in the coronavirus period where we ran out of people, where the people just got overwhelmed. And health care is a great example. There are early examples of, hey, we just didn’t have enough nurses to deal with the surges going into these hospitals. So I don’t know. We have the patient sensorized—why don’t we send all that sensor data to a machine intelligence that doesn’t replace the nurse; it just gives the nurse a more complete view of the patient by preprocessing, organizing and making recommendations, so now a nurse can maybe monitor 30 patients as opposed to three? That scales nursing, which is a very powerful tool. We’ve obviously seen it in terms of clinical care where if it’s a medical procedure, I mean, people dealing with a pulmonary specialist, we had a lot of breathing problems. Wouldn’t it be nice if we could make their life easier by having, I don’t know, maybe our ventilators be a little more self-regulating, a little more self-tuning? We’ve seen that kind of behavior occur, and we’ve realized that there are places where we just don’t have enough people to get the work done.
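
The “scaling nursing” idea can be pictured as a simple triage loop: preprocess each patient’s sensor stream into a risk score and surface only the beds that most need attention. The vitals fields, thresholds, and scoring below are invented for illustration; a real system would rely on trained clinical models rather than fixed cutoffs.

```python
# Hypothetical sketch: rank monitored patients by a toy risk score so one nurse
# can watch many beds and still check on the right patients first.
from dataclasses import dataclass


@dataclass
class Vitals:
    patient_id: str
    heart_rate: float  # beats per minute
    spo2: float        # blood-oxygen saturation, percent
    resp_rate: float   # breaths per minute


def risk_score(v: Vitals) -> float:
    # Invented cutoffs purely for illustration.
    score = 0.0
    if v.heart_rate > 120 or v.heart_rate < 45:
        score += 1.0
    if v.spo2 < 92:
        score += 2.0
    if v.resp_rate > 24:
        score += 1.0
    return score


def triage(readings: list[Vitals], top_n: int = 5) -> list[tuple[str, float]]:
    # Recommend which patients to check first; the nurse still makes the call.
    scored = [(v.patient_id, risk_score(v)) for v in readings]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_n]
```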

The other example, at the totally other end of the spectrum in covid, was logistics and delivery. Suddenly you just don’t have drivers, or you can’t have human contact, but people still have to get their deliveries, they have to get groceries, they have to move stuff. Well, it seems like the use of autonomous vehicles or semi-autonomous vehicles or AIs to better do route planning would go a long way toward making that particular function more effective.

And so, the aha moments in covid weren’t necessarily surprising when you understand them, but you can find them anywhere where we realized that human capacity has a finite boundary. And whenever we run into a place where humans are overwhelmed doing a task, and the task involves making decisions, thinking through data, trying to get something done, those are good places for us to apply machine intelligence so that we can scale the human being, not necessarily to replace them.

Laurel: So someday we’ll be out of covid. Where else are we starting to make AI real?

John: Well, I think everywhere, to be perfectly honest. There really isn’t an industry or a space that isn’t attempting it. Now we have challenges sometimes. Like in health care, it’s hard to put AI into health care because it’s a regulated industry; the timeframes are very long. So we’ve seen breakthroughs, not in health care, but in wellness. There’s some pretty cool things. Like there’s a ring called an Oura Ring, which basically monitors your temperature and a bunch of vital signs. It’s a wellness tool; it’s not a healthcare tool necessarily right now. But because it can use advanced machine intelligence, it can make interpretations, and we’ve discovered that that ring can give you a pretty good early warning that you might be coming down with something, or before you know you’re sick, it may tell you you’re about to get sick, which is a pretty powerful tool and pretty innovative.

But across the spectrum, we’re seeing the application of machine intelligence just be a natural point of technology’s evolution. In the 5G world, for instance, here’s a good example: we can’t build the 5G networks that we’re going to need with human intervention everywhere. They’re just too complex. And so candidly, we expect that in 5G and beyond, the hallmark of future telecom infrastructures will be automation. It will be AIs making the decisions around spectral efficiency, and bandwidth tuning and all kinds of things, because there’s just no way a human being can run a hundred-million-subscriber network, and that’s before we put all the things on it. It’s possible that in the US alone, some of these mobile networks 10 years from now might have a trillion things on them. That is a very, very, very large network that exceeds humans’ ability to think.

And so we’re already seeing the injection of machine intelligence into telecom networks, large-scale data centers, automating infrastructure in a way that allows the human beings to keep up. And then as you bounce around, we have initiatives going on in the freight and logistics space where people are realizing, hey, there’s a lot of goods and services moving around, but they move kind of slowly and clunkily. So what if we try to really tie together and fuse intelligent forklifts, plus the visual surveillance, and object mapping and algorithms to decide how to pack a truck properly or how to load a plane properly or how to move things through that logistic infrastructure in a place where it kind of slows down because there isn’t really a clear pattern there? Well, AI’s great when you don’t have a clear pattern. Let the AI figure out the pattern and develop a set of logic around it.

So it is universal. It’s very hard to find a place, if you ask the inverse of the question, where people aren’t using machine intelligence, other than places where out-of-date regulatory regimes have become impediments to people adopting these types of technologies more aggressively. And so, one of our burdens as an industry is to work with the regulators to update these regulations so that we don’t create a situation where the regulation prevents the natural progression of technology that moves human progress forward.

Laurel: Yeah. And I guess you would think regulation and security kind of go hand in hand, especially when the bad guys have access to the same tools as you do building the network. So how do you start also then securing all this amazing data?

John: Yeah. Well, I mean data’s just data. You can use it for good or bad, and unfortunately it actually is incredibly valuable and so it becomes a giant target. Security compromises don’t happen because someone’s bored; they happen because there’s a target worth stealing. And in our digital environment, the currency is the data, the insights, the models—these things are the real valuable tools. And the reality is they will be a target. So we have to really think about how we’re going to secure these environments in maybe a different way than we did historically in the physical world. To be very blunt, the current approach to security just won’t work, because our current approach to security is we have a thing that runs independent of security, and then we have things that attack it, and then we create security technology to counteract those things that attacked it.

The problem is, it’s an unwinnable battle, because candidly somebody can just come up with a new way to attack it, and then the security industry has to come up with a response to it. And that is not a good way to run an organization or a technology. And so our belief is, we have to shift to a model where we’re really looking at intrinsic security, where we’re building the security into the thing that we are protecting, whether we’re doing that in a cloud environment, or we’re doing it in a network environment. But the bottom line is we have to get away from this idea that security happens as a reaction to an external event. Instead, it needs to be something intrinsically built into the actual system and its architecture.

That sounds like marketing, but the bottom line is, it is not a winnable battle if we’re going to have a security product for every security problem. We have got to have architectures, and infrastructure and systems that are not built to react to any particular security problem, they are built to respond to any threat. They have a comprehensive understanding of their identity. They have the ability to control access and understand behaviors within them. I’ve always argued that in the security world there’s kind of three things you deal with. The known good, the known bad, and the unknown. And today, most of our security principles are around trying to block the known bad, which is unwinnable, and trying to sift through the unknown, but they don’t do that very well. And interestingly enough, the known good we rarely actually build for that. Now my argument is, we need to understand what the known good behavior is, and we need to lock that down and make sure that that happens. We need to prohibit the known bad, that’s an obvious statement. But it’s the unknown where all the innovation is going to come from.

And that brings us back to things like AI and ML. The idea of using machine intelligence to sift through the unknown, to very quickly determine, is it a known bad or a known good? Which camp does it belong in? And to do that faster than the other side can do it, because we have better tools to understand behaviors, and to have the frameworks built into the infrastructure itself. The most important thing is, even if you use AI to understand new threats and to decide if they’re good or bad, if it’s done outside of the infrastructure, you’ll still have to deploy another product to react to it. If on the other hand the infrastructure is the product that reacts to the security events, if it’s literally just telling the infrastructure, change your service chain in your SDN, change the virtualization layer, change your Kubernetes manifest, but you’re not deploying any new technology—you’re just imposing new behaviors on the infrastructure as it exists—then all of a sudden that brain can actually go into production much quicker than having to deploy a whole new product or a whole new system.
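
A loose sketch of that known-good/known-bad/unknown framing: a classifier assigns unknown events to one of the two known camps, and the reaction is a policy change applied to infrastructure that already exists rather than a newly deployed security product. The event fields, the anomaly score, and the infra.apply_policy call are hypothetical placeholders, not any specific Dell or SDN interface.

```python
# Illustrative sketch: sort events into known good, known bad, or unknown, use a
# model to resolve the unknowns, and react by reconfiguring existing infrastructure.
KNOWN_GOOD = {"backup-job", "patch-download"}
KNOWN_BAD = {"credential-stuffing", "known-malware-c2"}


def classify_unknown(event: dict) -> str:
    # Stand-in for a trained model scoring behavioral features of the event.
    return "known_bad" if event.get("anomaly_score", 0.0) > 0.8 else "known_good"


def react(event: dict, infra) -> None:
    label = event["type"]
    if label in KNOWN_GOOD:
        return  # explicitly allowed behavior; nothing to do
    if label in KNOWN_BAD or classify_unknown(event) == "known_bad":
        # React through the infrastructure itself (an SDN service chain, a
        # Kubernetes NetworkPolicy, a virtualization-layer rule) instead of
        # shipping a new point product for this particular threat.
        infra.apply_policy(
            name=f"quarantine-{event['source']}",
            action="deny",
            match={"source": event["source"]},
        )
```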

But security, here’s the bad news, is one thing that’s never going away. We are constantly in a dynamic security race between bad guys and good guys. But I think we can move a lot faster if we get out of this mode of thinking that for every security problem, there’s a product. It has to be that our infrastructures are the reactive mechanism, and we use machine intelligence aggressively to try to understand when to react. But that reaction does not require replumbing the entire infrastructure or changing our architectures to react. If you get into that mode, you can move faster than the adversaries, and you have a system-level intrinsic security approach, which is a big shift for people, but logically it’s the only way we’re going to be able to get to any kind of success as we start thinking about the scale of this future in front of us.

Laurel: I like the phrase “machine intelligence,” because that really is what it is. It has to be throughout the entire system, whether you’re building a good offense or better systems to react quicker and faster. It is not just artificial intelligence, it’s not just machine learning. It actually is a combination of the two that allows you to do that much more. And it also puts a lot of expectation and burden on the people creating these systems to work in a certain way. So I know you are on the board of Cloud Foundry and open source is important, but that is sort of the root of open source, right, is thinking about how we all can work together and sort of democratize this technology in a way that everyone who pitches in does actually gain something in the end.

John: Yeah. No, absolutely. I mean, I think, open source methodologies—this idea of community-based development, by the way, is not new and it’s not unique to open source. I’ve done work in standards bodies for 20-something years now. And if you go into the IEEE [Institute of Electrical and Electronics Engineers] or the IETF [Internet Engineering Task Force], it is a community. It’s a little slower-moving because it has more Robert’s Rules of Order and approaches. But the idea is, I’ve always been a believer that the best technology is one that’s built in the light of day, that it is not one smart person in a back office somewhere coming up with the answer to the problem. You throw your problem out there, and you as a community work through that problem. You have dissenting voices and consensus.

What’s interesting about the current open source world, versus traditional standards bodies that move very slowly (it could take a decade to get a standard out in the IETF), is that open source just moves faster; it’s eliminated some of the bureaucracy. It says, we’re not going to presuppose how you do the work, but we are going to insist that it be the consensus of the community, that the community move forward on this journey.

Now we do have a problem with open source today, and that is that open source still has a silo problem. The open source projects typically aren’t tackling system-level problems. They are, we have a group going off and building Kafka, or we have a group going off and doing Hadoop, and we have a group going off building Kubernetes and CNCF [Cloud Native Computing Foundation]. And those are wonderful. But the only way this really works is if those open source projects start to come together, because no one solves a digital outcome with any one of them. Kubernetes, as good as it is, does nothing by itself, to be perfectly honest, in terms of business outcome. There has to be a workload on it, there has to be a data stream, it has to run on an infrastructure.

And so, I think there’s kind of two takeaways from the open source world. First is, community-based development, whether it was done in a standards body or open source, is the fastest way for people to figure things out, and we should embrace it, and expand it and use it wherever we can. It just works better. The second though, is that even if we do that kind of work on a particular component, we have to take the principles of that kind of thought process of looking at things from a broader perspective, an open-innovation perspective, and apply it into system-level architectures. One of the best examples of that is something we just touched on earlier, which is 5G. There is a huge debate in the world right now about how 5G should be built. There is the legacy 3GPP [3rd Generation Partnership Project] traditional approach that says, ah, it’s good to have componentry, but we’re going to be very, very structured and disciplined, and there’s not going to be a lot of room for innovation because we’ve decided what 5G is. There’s the answer; go implement it.

I disagree with that approach because it was built based on technologies that are long since obsolete. There is a new way of thinking about it that says, hey, we still want to get to the same outcome, we still believe in the same interfaces and the same standards, but how you actually execute it should be open-minded about how you do virtualization, and how you link to hardware and how you open the radio-access network up. And that level of thinking is squarely in how people think in open source communities and in modern software development projects. And so, we’re seeing this interesting collision between, let’s call it the open-ecosystem world and the telecom world, really causing a lot of stress and interesting evolution of the 5G ecosystems. But to me, I think it’s a very positive outcome, because that technology is so important that we better do it the right way. And we have abundant evidence that says open source, open ecosystems, open systems are actually a faster, better way to get to a superior outcome for a lot of things that people have tried to do in other ways.

And so, we’ll see how it plays out, but open source as a concept and a community development model has influenced far more than just the projects that the open source happens in.

Laurel: And I love that, that kind of energy and excitement, and especially, again, confluence. We’re bringing everyone together to make this change happen. Speaking of, how do you do this at Dell? How do you strategically think about AI and lead this enormous company? So many different teams, and you have wonderful people and wonderful teams. But how are you thinking about this strategically and how are you advising other leaders to think about AI and machine intelligence in a way that makes sense, in a way that perhaps is open, which challenges the way they’ve done business before?

John: Yeah, yeah. As a general answer to that question: at Dell, we are an enormous company covering almost every aspect of infrastructure, from bare-metal hardware all the way up to application stacks and developer environments. We’re just extremely big and extremely broad, which is part of the value proposition of the company. One of the things that we realized early on though, was that when you’re that big, you do have to have kind of governing principles. There has to be kind of a framework around this. And so we are very disciplined around having a strategy, having a North Star, understanding clear roles and responsibilities. But making sure that we understand that implementations, when you do something big like edge or cloud, will happen in many places. But if you don’t have a structure where everybody kind of understands why you’re doing it and what the first principles are, you’re going to struggle.

For instance, just recently in edge, we’ve made some decisions about how Dell positions edge. And they’re high-level, but they frame how our developers think. For instance, we believe that edges are not standalone entities. Edges are extensions of cloud-operating models. You don’t build an edge to build an edge. You build an edge to extend your cloud architecture, whether it be a public or private cloud environment or a hybrid, multi-cloud environment, out into the real world. And that sounds very subtle, but if you don’t make that decision inside of a company, then you’re just rolling the dice to see if your teams build more silos or actually build an extension of your core value proposition, which is to build a multi-cloud. And so by having that North Star, it’s clear. Other examples in edge, we made a decision that we believe edges should be platforms. Now that sounds very obvious, except most edges today are bespoke silos for a specific workload.

Somebody decides, I want to take my AI framework out into a factory, therefore I’ll build an edge. Even some of the public clouds have built effectively very narrow bespoke silos that extend just a few features of their public cloud. Nothing else. Now, when we started to look at it, we said, wait a minute. Edge is a capability of an end-to-end experience. You will have many end-to-end experiences. And if you have to build an edge for every single one of them, you’re going to make the edge market look a heck of a lot like the security market, which we don’t want to do. In the security market, if you go into a security data center of an enterprise, you find a rack of gear. Every piece of gear has a different logo on it and does one thing. We don’t want edge to look like that. So we made a decision that edge should be a platform. That what we should build is horizontal capability. We should acknowledge that that edge might be used for an AI task, it might be an industrial automation task, it might be a video surveillance task.

We need to have maybe several different edge architectures to accommodate different approaches, but you’re not trying to build a single, vertically specific silo for every edge problem. You’re trying to build a platform that allows the customer to solve their edge problems today. And when they come up with their next edge problem, they just have to push code into the platform and then work at the edge, as opposed to building a new edge. Now, those things, what I just said, hopefully are completely obvious, but most people don’t make those decisions. So at Dell, we do. We make first-order decisions about what is our philosophy? How do we think about things? We then turn them into architectures that describe exactly the technical work that needs to be done, but we don’t go so far as to dictate down to the implementation and the product exactly how they’ll innovate to get to that outcome. That’s the magic of having great R&D teams. They go off and they figure out the best way to build the product. They are innovative in that respect, but it all comes together into a system.

In fact, today I lead efforts to basically make sure in these six huge areas inside of Dell, that we are consistent in our architecture, that we’re navigating them as a company at the system level. They include the evolution of cloud, the evolution into the new data ecosystem, of data in motion and how we play there. They are edge and how we extend IT out to the real world. They are AI and ML, which is how we turn the entire technology ecosystem to be a different division of labor between people and machines, around the thinking tasks. They are 5G, this big inflection of the telecom, and IT and cloud world smashing into each other. And our view is it really needs to be cloud- and IT-dominated, and it needs to be a modern infrastructure. And then lastly, around security, and we touched on that with intrinsic security. Those are giant things, but to answer your question, at a company like Dell, or any company, you need to know what your North Stars are, what are the things that are coming at you?

In our case, it’s those big six. You need to have a point of view that describes first principles and a framework that describes the playing field, and then you need to have a structure that operationalizes that to get that message into your development community, into your product groups, into your service organization, and into your marketing teams, so that they’re all working across the right playing field with the right, let’s call it, script. But you don’t want to be so prescriptive as to prevent them from innovating in how they implement it and coming up with different pacing. It’s that balance between freedom of movement for the developer, and having a framework, and an architecture and a North Star. If you get those right, you can navigate technology. But if you miss the North Star, you miss the framework or you don’t have freedom of movement on innovation, you’re not really going to execute well. So for us, it’s really those three big ones.

Laurel: That’s excellent. We could spend a whole ’nother day talking about edge computing and everything else, but I appreciate your time here so much today, John. Thank you for joining us today in what’s been a fantastic conversation on the Business Lab.

John: Yeah, thanks very much for having me.

Laurel: That was John Roese, the president and chief technology officer of products and operations at Dell Technologies, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. The Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.


