CIO Speaks – Episode 11: A Conversation with Eric Dynowski of the ServerCentral Turing Group


Steve speaks with guest Eric Dynowski about helping organizations with infrastructure design, strategy, and implementation.

Guest

Eric Dynowski is the chief technology officer at ServerCentral Turing Group (SCTG), which offers cloud-native software development, AWS consulting, cloud infrastructure and global data center services. Eric has a comprehensive background in infrastructure design and integration (including business mergers, spinoffs, acquisitions, and start-ups), and has been credited with reducing IT budgets by millions while adding inherent value to existing departments by focusing on excellence in staff and deployment. At SCTG, he helps Fortune 500s and public enterprises leverage emerging technologies to scale, modernize, optimize, and manage their IT infrastructure.

Transcript

Steve Ginsberg: Hi, I’m Steve Ginsberg. My guest today is Eric Dynowski, managing partner and chief technology officer at ServerCentral Turing Group. He has a background in tech and helps organizations optimize infrastructure design, strategy, and implementation.

Eric, thanks very much for joining me today.

Eric Dynowski: No problem, glad to be on, Steve.

In looking at your background, one of your specialties is infrastructure as a service (IaaS). There are obviously many ways to approach this. I’m wondering how you’re seeing the solutions play out here.

Yeah, ‘many ways to approach it’ is definitely a good, succinct description. We’re seeing quite a broad range of requests from our customers. The one thing that’s common across the board, in most cases, is that our customers are no longer interested in managing their own infrastructure. That’s a pretty broad statement.

Infrastructure could just mean, hey, we don’t really want to be responsible for providing power, cooling, and base networking, but we want to manage our OS. Whereas other customers say, “Hey, I don’t even want to source my hardware. I want you to provide it and lease it.” Others will say, “I don’t even want to hear the word ‘hardware.’ I just want instances to put my software on and go from there.” Then another group of folks will say, “I don’t even want to hear the word ‘instances.’ I just want to deploy my software somewhere,” and containers come into the discussion. It’s a pretty broad mix.

I would say that across our entire customer base, our experience has been: there’s no single answer yet that fits every company that comes to us. It really depends on their business, their workload, and their business objectives. Some companies are price sensitive, and others are not. They have different drivers. I think that there’s still a role for the major hyperscale providers, like Amazon, Azure, and Google. At the same time, there’s still a place for VMware. There’s a place for Hashi[Corp]. There’s a place for all the various Kubernetes solutions. There’s a place still for bare metal.

It’s an interesting time in the environment. Things are evolving quickly, especially in the container space. I think the Cloud Native Computing Foundation’s got a great little map of all the pieces of software that are out there right now. It’s 150 different components, and they’re all under massive development. This space is just exploding. People are doing things in containers that two years ago you would have laughed at. Yeah, it’s a dynamic space right now.

What are some of the more exciting container application deployments that you’re seeing? I’m guessing most folks are going that way because they want microservices that can scale on their own, and less monolithic infrastructures. You said ‘laughed at’ – are there novel uses or particularly excellent uses that you’re seeing?

Yeah, I think customers are definitely looking at containers for those reasons. The microservices architecture has actually been around for a little while, and people have tried many different strategies for how they want to host it and run it. Containers were at the forefront of that. I think some of the stuff that’s driving containers today is less that, oh, we want a microservices architecture, so we’re going to go to containers. It’s that we have a microservices architecture, and we’re looking for a cloud-agnostic or cloud-native way of running it, meaning: we want to be able to run it on Amazon. We want to run it on Azure. We want to run it on Google. We want to run it on bare metal. We want to run it on our own KA platform. Originally, the things that you saw in containers were stateless apps that didn’t store data and didn’t need access to any local data. They’d connect to a database that was running somewhere in a nice database cluster, a cache layer, an API, or things of that nature.

What we’re seeing now is: customers like what containers are doing for them and how they can move those workloads around and orchestrate them, and they’re pushing out further beyond just stateless applications. They’re coming back to us and saying, “I want to run my database in a container. I want to run my queuing application in a container. I want to run Elasticsearch in a container, and a Redis cache in a container,” and things like that, which require persistent storage, which was never part of the original container idea. The rule of thumb was: if it used local files or needed access to the file system, you didn’t want to run it in a container.
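To make that concrete, here is a minimal sketch – using the Kubernetes Python client, with hypothetical names, image, and storage size – of the stateful-workload pattern Eric describes: a StatefulSet whose volume claim template gives a containerized Redis its own persistent volume. The headless Service a StatefulSet is normally paired with is omitted for brevity.

```python
# Minimal sketch: persistent storage for a stateful container via a
# StatefulSet volume claim template. All names and sizes are illustrative.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

redis_container = client.V1Container(
    name="redis",
    image="redis:5",
    ports=[client.V1ContainerPort(container_port=6379)],
    volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
)

stateful_set = client.V1StatefulSet(
    api_version="apps/v1",
    kind="StatefulSet",
    metadata=client.V1ObjectMeta(name="redis"),
    spec=client.V1StatefulSetSpec(
        service_name="redis",  # headless Service, omitted here
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "redis"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "redis"}),
            spec=client.V1PodSpec(containers=[redis_container]),
        ),
        # The claim template is what makes this stateful: each replica
        # gets its own PersistentVolumeClaim that survives restarts.
        volume_claim_templates=[
            client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": "1Gi"}
                    ),
                ),
            )
        ],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(
    namespace="default", body=stateful_set
)
```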

I think the question is: Is the database a good idea in a container for those reasons?

Yeah, and I think a few years ago, I would have said, “Absolutely not. That’s not a good idea.” Where things are going today, the answer is: yeah, that’s worth considering. We can think about that and about the best way to deploy it.

Our customers are saying, “We want to run a database in a container, and we want to move it between clusters. We want resiliency.” What’s cool is that there’s a ton of software in the container space right now, different pieces under rapid development, that are enabling that to happen. Containers and serverless are now turning into a stack where you can run every aspect of an application. A few years ago, we would have laughed at you and said, “Don’t try that. That’s really risky. You’re asking for trouble.” Today, we’re like, okay, let’s think about that. Let’s see if we can come up with a solution.

Are you seeing that play out very predominantly in Kubernetes? Or are your customers also using some of the other container options?

I would say 99 percent is Kubernetes. There are some people running Docker Swarm and a handful of other things from third parties, but predominantly today, everything’s coalescing on some flavor of Kubernetes, whether it’s on top of OpenShift, in one of the public clouds like GKE, Azure’s solution, or Amazon’s, or on Rancher, or Pivotal, or things of that nature. Everything is coalescing around some form of a Kubernetes core.

Yeah, that makes a lot of sense. That’s a lot of what we’re seeing as well. I noticed that you built up a lot of experience in financial exchange connectivity with trading systems. I’m wondering what lessons from that work you think are relevant more widely to enterprise architects, especially in light of edge agendas.

I think that space has changed significantly as well in the last ten years, in terms of what we see the finance industry doing. Ten or 15 years ago, hedge funds were writing their own custom applications. Everything was proprietary. It was latency-sensitive. Applications were being written in C++ on dedicated servers, things of that nature. Some might argue they were ahead on the technology curve in some ways, especially if they were in the high-frequency trading space, and they were really pushing the limits of what networking and servers were capable of. You were trying to eke out every microsecond. I think that a few things have happened since then that have actually put the finance industry in a position where they’re lagging behind other enterprises.

They developed this mentality of running dedicated servers and having ridiculous requirements for performance and availability, and they really stuck to their guns with that, whereas the rest of the enterprise moved on and started containerizing applications, going to microservices, and designing software to tolerate downtime. The idea is that everything goes down all the time, so instead of trying to build redundancy into the hardware, let’s build software that can handle failures.

What we’re seeing now is that a lot of these financial firms are realizing, hey, we’re actually ten years behind where a lot of the leading internet companies are today, and it’s okay for us to start to adopt those technologies. It’s akin to the way people often feel the banking industry is behind in its technology – still using mainframes, still feeling that it needs to rely on really resilient hardware.

Even some of the container conversations we just had – those conversations are coming from financial firms that are approaching us saying, “We’re starting to containerize our applications. We really like what’s happening, and we want these other features and functionality that we’re used to.” I think from that end, the financial services firms are only starting to get caught up. They’re only now starting to say, “You know what? It’s okay to have an infrastructure that’s in the cloud. That extra ten milliseconds of latency doesn’t matter anymore.” I think those are some of the changes we’re seeing there.

Sure. Do you think there are particular points they should take advantage of regarding security? We’ve been looking at some of the recent developments, and we’re seeing reports of implementation problems causing tens of thousands, if not millions, of exploitable points in cloud infrastructure in general.

Do you think financial companies are going to be up for that challenge? If so, anything in particular you’re noticing to help them leverage, to do a better job in these implementations?

Yeah, personally, I think the cloud solutions – things like AWS, Azure, and Google, and AWS specifically – actually make security easier, not harder. In a traditional environment, especially if you’re managing the data center, there’s so much more ground-level work that you have to handle, take care of, and focus on. Whereas if you’re jumping into one of the public cloud providers, or a managed solution where your provider’s handling a lot of the low-level security pieces for you, it offloads a lot of that.

For example, if we spin up an environment in AWS today, we have audit trails built in automatically. Every single change is documented. Every single connection attempt is documented. There’s an audit log that’s immutable, and we ship it to a different account. A lot of the basic requirements for running a SOC 2 or ISO standard compliant environment are built in by default, and you can’t avoid them, whereas in traditional environments, you’ve got to build all the processes, policies, and procedures around that. In many cases, they’re quite manual. Someone’s got to go and update a log somewhere. There’s a manual change control process in place, whereas that stuff was thought about and built into the public cloud platforms. I think in those ways, it makes it easier.
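As a rough illustration of the built-in audit trail Eric mentions – a sketch only, using boto3 with hypothetical trail and bucket names, and assuming the destination bucket in the separate logging account already grants CloudTrail write access – the whole arrangement is a couple of API calls:

```python
# Sketch: an immutable, validated CloudTrail audit log shipped to a
# bucket owned by a separate logging account. Names are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    # The bucket lives in a different, locked-down account.
    S3BucketName="central-logging-trail-bucket",
    IsMultiRegionTrail=True,
    # Digest files let you verify logs weren't altered after delivery.
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```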

At the same time, because the public cloud providers make it easy to deploy resources live on the internet in minutes with a credit card, if you’re not thoughtful about what you’re doing, it’s very easy to put things up that are insecure. We’ve seen that time and time again. I can’t tell you how many articles I’ve read about somebody putting a whole bunch of confidential records in an S3 bucket that wasn’t protected. In one sense, I feel the responsibility lies with the consumer of the cloud service who misconfigured it. But I also feel there’s a little bit of responsibility on the cloud providers. Why are you defaulting to a position of ‘open access by default’ rather than locking everything down and forcing the customer to open things up as needed?

I think in the last year or two, we’ve seen that shift. If you provision an S3 bucket now in AWS, they really make you jump through a lot of hoops to make it publicly available. That being said, I think really, the message for financial services is that, hey, if you’re going to start using the cloud, make sure you’ve got a good strategy. Make sure you have a good governance model. Make sure you’ve got some experts in place that are going to be there to audit what you’re doing within the cloud. Go through regular security audits. Just because you’re using the cloud doesn’t mean you get to skip all that stuff. The great part is you can automate a lot of it. You can automate security checks. You can automate security scans. The APIs are great. You can automate the analysis and processing of logs, which were things that were much more difficult to do before.
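The kind of automated security check Eric describes can be quite small. Below is a minimal sketch in boto3 that flags any S3 bucket in an account lacking a full public-access block; a real audit would also inspect bucket policies and ACLs.

```python
# Sketch: flag S3 buckets that lack a complete public-access block.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(conf.values())  # all four settings enabled
    except ClientError:
        fully_blocked = False  # no public-access block configured at all
    if not fully_blocked:
        print(f"WARNING: bucket {name} may allow public access")
```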

In this hybrid world that we’re talking about – taking advantage of containers for this as well – are your customers architecting well for partner API connections, as they increasingly have business partners as part of their supply chain?

Yes. Here at ServerCentral Turing Group, we’ve got multiple lines of business, and we see the conversation around APIs across pretty much every one of them. For example, on our software development side, our customers are coming to us and saying, “Hey, help us build an API around our product. Help us build an API around our data so that our customers can integrate with us.” That’s a common thread that we see across the board.

Secondly, the customers that are approaching us for our managed services, our infrastructure services, and our cloud services are all saying, “We need all of those services to have some form of API interaction. We don’t want to manually provision anything. We want to use configuration management tools. We want to allow our applications to provision infrastructure and shut it down.” I think the API thing is a no-brainer. It’s all over the place. For any new product that we build, any new service that we offer, an API-first approach is an absolute requirement. Yeah, it’s a no-brainer.
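A toy example of what “allow our applications to provision infrastructure and shut it down” can look like in practice – a boto3 sketch with a placeholder AMI ID, not a production pattern:

```python
# Sketch: an application provisions an instance for a job, then
# terminates it when the work is done. The AMI ID is a placeholder.
import boto3

ec2 = boto3.resource("ec2")

worker = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)[0]
worker.wait_until_running()

# ... hand the instance its workload ...

worker.terminate()  # the infrastructure goes away with the job
```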

As the world fans out more widely, are you seeing true edge data centers playing out in your customer strategies? Are people looking for thousands of locations? Or are we really just more in the era of seeing people take advantage of some availability zones and a little bit of geographic diversity?

To be completely frank, I think we’re in an era of a lot of edge hype.

Fair enough.

Yeah, I think when we talk about edge, there are probably several different dimensions or ways to talk about it, and some are more interesting than others. Maybe I’ll split those up, and then we can let the conversation follow that. The first way to talk about it is getting content closer to the consumer. A lot of times, that’s the conclusion most people jump to when you talk about edge: okay, I want to make sure my images and my video are as close as possible to the consumer, so that my load times are as minimal as possible and I’m not paying for transit across expensive links. Yes, customers are concerned about that, but I think it’s a solved problem. I don’t think it’s a problem that will be enhanced in any way by having a data center, or something that resembles a micro data center, every ten miles.

If my data center is in Chicago and my customers are in Milwaukee, that’s probably just fine. That additional latency to get up to Milwaukee, in most cases, is really not going to be a massive challenge. Where we see that content distribution problem is really more of a regional thing. It’s more, are your customers in the Midwest? Your content should be in the Midwest. Are your customers on the West Coast? Your content should be on the West Coast. Are your customers in the UK? Your content should be in the UK. It’s a regional problem. I think it’s a solved problem today.

There are some other underlying issues, especially when it comes to video, that we’ve gotten really good at and that matter – issues that are less about how fast the content is available and more about latency. Peering, having quality bandwidth and connections between all the major providers, and being able to provide low-latency paths across backhaul networks are important. We do see quite a bit of that.

On the flip side, the other thing edge does is aggregate and de-aggregate traffic, and also provide a caching layer. Another reason to have a CDN is not necessarily to bring your content closer to your consumer. It’s actually to serve the content to the consumer and save your back end from having to do that. We limit the number of database queries we make. We maybe limit the number of times a particular file is served from a single location. It relieves load on your back-end infrastructure. That’s a totally legitimate need, and as you build highly scalable distributed systems, edge locations play a key role in that. Again, I don’t think it’s, oh, I need a micro data center every ten miles to achieve that. I think you can achieve it through multiple providers and region-based data centers.
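The back-end offload Eric describes is essentially the cache-aside pattern. Here is a bare-bones sketch, with a hypothetical Redis endpoint and a stand-in fetch_from_database helper: repeat requests are served from the cache layer, and the database only sees misses.

```python
# Sketch: cache-aside to relieve back-end load. The endpoint and the
# fetch_from_database helper are hypothetical stand-ins.
import json
import redis

cache = redis.Redis(host="edge-cache.example.internal", port=6379)

def fetch_from_database(product_id: str) -> dict:
    # Stand-in for a real (expensive) database query.
    return {"id": product_id, "name": "example"}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # hit: the back end is untouched
    record = fetch_from_database(product_id)  # miss: one DB query
    cache.set(key, json.dumps(record), ex=300)  # keep for 5 minutes
    return record
```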

We’re doing some writing at GigaOm and coming to these same conclusions – that content delivery has been an ongoing, largely solved process. As you mentioned, for these types of content, a lot of this stuff will buffer anyway. If you’re talking video, it will buffer.

Then there are some of the applications where you would most want a near response – say, for example, people talk about self-driving cars. You’re not really going to trust much of that beyond the car itself, because if it loses connectivity, it can’t be waiting on the network. There’s a whole class of things where edge might be useful, but it won’t be trusted. That has to be on the local device.

I think there will be some room in smart buildings, manufacturing, farming, and things like that where there might be some interesting edge applications, but I agree with your sense that a lot of it is hype currently.

That’s a great lead-in to my last point about edge, Steve. I think the more interesting thing that’s happening with edge, which has yet to develop and which I spend a lot of time thinking about, is: what does the internet look like in a post-Amazon, Azure, and Google heyday? We had this massive evolution of moving to public cloud, infrastructure as a service, automation, and consumption of that stuff. What does the next thing look like?

I think one of the things, in terms of edge, that people oftentimes don’t think about is: right now, all the conversations are about the content. We’re not actually talking about the computing so much. What we’re starting to see, actually, with the deployment of a lot of IoT devices, whether they’re security cameras, locks, things like Alexa, Google Home, and all these other products – there’s a lot of heavy lifting that has to happen locally on that device for those things to be functional.

Also, a lot more heavy lifting is being moved into those devices simply because we can. For example, take Amazon’s Cloud Cam. You can set it up to send alerts when a person – a human being – enters the frame. The cameras are smart enough to differentiate between a human body and a dog. All of that’s happening locally on the camera, which means there’s a lot of compute power in that camera. There’s a ton of compute power. A Tesla is a supercomputer on wheels – there’s probably more compute power in a Tesla than there was in an entire data center in 1992. There’s a ton of compute power in your pocket. When we talk about edge computing, I’m fascinated by the idea of coming up with distributed computing models to leverage the spare cycles sitting on all these devices. We’ve been able to shrink CPUs and integrated circuits down to such a small level and put so much computing power in these small devices.

There are so many of them floating around out there, and wireless technologies are evolving quickly – we’ve got 5G on the horizon. We always think about those technologies in terms of pushing data down to the customer: how fast can they push it down? What if we reverse that paradigm and leverage that wireless link to push data to a device for processing or serving purposes? There are some really interesting opportunities for the evolution of the internet as a whole if we can take advantage of all of these supercomputers, effectively, that are sitting out there in people’s pockets, parked in parking lots, and hanging from the sides of buildings, and distribute compute capability across them. In the sense of edge computing in the truest sense – compute and processing rather than content serving – I think there’s an interesting thing that could happen in the next 10 or 15 years.

We might actually see, essentially, a P2P computing edge – hopefully versus a botnet, which is what we see –

That’s right. That’s right. Yeah, look at something like Ethereum – not the cryptocurrency itself, but the platform for distributed computing. It’s in the very, very early stages of development today; it’s effectively a research project at best. I think some of the potential for what’s happening there is quite interesting. And maybe it’s not for free. Maybe you lease the spare cycles on your phone back to a major provider, in the same way that, hey, when I have solar cells on my house and I have excess power, I sell it back to the provider. I think there are some really interesting things that can happen there.

Yeah, I think that is really an interesting horizon. I’m wondering – we’re seeing new offerings from the public clouds arriving almost daily, even from just the most major cloud providers, and then of course, there are more specialized clouds beyond that.

How are you advising your enterprise customers to create good strategy – to not get too caught up with the shiny object, but also to make sure they’re really taking advantage of the good offerings as they come out, whether that’s serverless offerings or, as you mentioned, different ways to store data, these types of things?

Yeah, I hate to sound too consultant-y, but usually, most of our engagements start with: what are you trying to do, and why? Then we tailor our answer based on the goals and objectives behind what they’re trying to do. There are definitely cases where, yeah, you should build a serverless application on top of Lambda, and other cases where, no, you should not do that, even though it would work. The drivers for that can be technical, business, or financial.

I think that in each of those cases, we really start from that consultative standpoint of understanding what the customer’s trying to accomplish, what their business is, where they’re heading, and really, what the driver is. A great example of that: we recently had a conversation with a major dot-com that provides productivity software on the internet. They had a huge push to move to GKE and move everything over. Now they’re pulling it out for various reasons.

I went in and I asked them, “Why did you guys do that? What was the driver behind it?” (thinking that, oh, they wanted performance, or scalability, or they wanted to save costs – that’s oftentimes a huge piece with many companies; they think, oh, if we move to AWS or Google, we can leverage things more efficiently and save money). No, the answer was: “We needed to enable more rapid iteration within our development group, improve deployment pipelines,” and things of that nature. It wasn’t about scale. It wasn’t about cost. It wasn’t about anything other than flexibility and leveraging an API-first mentality. They were willing to pay two to three times more for their infrastructure because the payback in releasing new features and keeping their customers happy was well worth it. Really, the short answer is: why are you trying to do this? What are you trying to do? Then we’ll look at the field of available technologies and come up with something that fits the very specific use case that customer is bringing to us.

One of the ones that you touched on there is serverless. I think some of our listeners might be struggling with even the basics of why they would consider serverless versus not. Obviously, a lot of the considerations you alluded to would be customer-specific. I’m wondering if there’s a bit of an Occam’s razor you might offer for why you would direct customers to start looking at serverless versus not.

Yeah, I think usually the first one is: if you’ve got short-running processes that don’t require a lot of compute, and you don’t want to set up dedicated resources to run them, that’s definitely a good scenario, because you’re only paying per millisecond of execution time. Say it’s a small process that runs for about a minute every hour. Back in the old days, you set up a server, you set up a cron job or some type of job scheduling system, you ran it, and that server stayed up and running the entire time. In those types of cases, serverless is good.

If you’re dealing with any kind of event handling where the event streams are infrequent and you need to do some massaging or manipulation of the data, they’re really nice. If you’re building a microservices architecture, a lot of the serverless technologies available today are pretty amazing. You can build quite a bit and not have to worry about managing the OS, deployment configurations, and things like that.

If you don’t want to manage an internal DevOps and IT team, there are reasons to consider it. And if there are particular proprietary services that one of the cloud providers offers and you want to leverage them – AWS specifically has awesome integration between Lambda and almost all of its other services. You can roll out solutions rather quickly, whereas before, you had to build an entire server and supporting framework to run an application. Today, you can integrate with an event stream and do all these sorts of things in a matter of hours or days, depending on your experience level, and not worry about all the underlying infrastructure. The downside is you get locked into a proprietary platform.
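For listeners who haven’t seen one, a Lambda function for the cases Eric describes can be very small. The sketch below assumes an SQS-style trigger whose records carry JSON bodies, plus a hypothetical DynamoDB table; on a scheduled (cron-style) trigger, the loop simply does nothing, and either way you pay only for the milliseconds the handler runs.

```python
# Sketch of a minimal Lambda handler: massage incoming event records
# and persist them. The event shape and table name are assumptions.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("processed-events")  # hypothetical table

def handler(event, context):
    for record in event.get("Records", []):  # SQS-style trigger
        item = json.loads(record["body"])    # assumed JSON payload
        item["normalized"] = float(item.get("value", 0))
        table.put_item(Item=item)            # assumes an "id" key attribute
    return {"statusCode": 200}
```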

This could be the answer to my next question, which was a little bit about the edge offerings from some of the CSPs as they start to push their stacks down. Your comments about whether enterprises need edge are well taken. But say a company is in manufacturing and needs to run local facilities or something similar. How do you feel about the AWS stack, the Azure stack, and the Google stacks that are moving down to bring that…?

Yeah, I think it’s an interesting space. One of the concerns with lock-in on the public cloud side is, well, my whole solution is based around Amazon’s API, for example. Yeah, I love it, but I need stuff in my data center locally, or I need it in the factory, or I need it in this building for whatever reason. Amazon recognizes that. All the cloud providers are recognizing it. Outposts is a solution to that.

I think the power of the CSPs is really that they have amazing APIs for their platforms and that you can get services and infrastructure on demand. The idea that I can continue to do that and have some of it locally for those particular use cases is awesome. I think it’s fantastic. I don’t have to have a different API, or a different interface, or a different way to consume infrastructure locally.

I can use that same API that I’m used to in the public cloud and have been using all along, but now I can provision stuff that’s sitting, for example, on the factory floor to run machinery or something like that, where you’ve got connectivity constraints and you can’t afford to be down. I think it’s a good thing. VMware is doing it too – you can get an entire stack from Dell that you interface with the same way you would their public cloud. It just gives more options and more choices, so I think it’s good.

You mentioned VMware. Beyond the big three public cloud providers, how do you feel about some of the specialized clouds that are out there? Packet has some interesting capabilities. DigitalOcean seems to be very developer-focused. There are more beyond [that], whether they be Nvidia or VMware. How do you feel about some of the other clouds?

I think VMware’s still interesting, especially in the enterprise space. There are still a lot of enterprise companies that leverage VMware and third-party proprietary products built on it, and really depend on them, and there’s no answer for those things on top of any of the other platforms. As a service provider, we still need to have an answer for that, and there’s still a place for it.

I think from the VMware side, there’s still a lot of play. Also, VMware and Dell – they just acquired Hashi. There’s a lot happening on the container side, and I think interesting things are going to happen there. There’s definitely a place for it. I think the same goes for Packet and DigitalOcean, but I would say that right now, the big three CSPs’ platforms are so much more mature. For something like Amazon, what really adds the value is the level of integration across all the different services they sell.

Having the ability to send a message to a queue, trigger an auto-scaling event, and run a piece of custom software at the exact same time is pretty amazing. I think there’s a lot of catch-up to do there. Unless you have a specific need that isn’t answered by the three main CSPs but is answered by Packet or DigitalOcean, then they’re worth chatting about. Yeah, it just depends on why you’re looking at them.

Great, Eric. I really appreciate your answering my questions today. Thanks very much for joining us.

Anytime. Great questions, Steve, appreciate the time.

Take care. Thanks for listening, everyone.

Bye-bye.
