Translattice Shakes Up Distributed Computing
One of the basic assumptions about cloud computing is that service outages are bad. An application that goes down, for whatever reason, is an expensive problem when it happens. But it’s also an expensive problem to plan for, usually involving a lot of redundant hardware and software that kicks in when the primary systems fail. It’s not an attractive notion, but then again neither is downtime.
Most of the time, database applications run in one central location. Sometimes there are legal requirements about keeping data within national borders, or corporate policies about keeping data on company-owned hardware. The reasons vary. Organizations have paid a lot of attention to fault-tolerant hardware, redundant network connections, and recovery processes. But the applications themselves get short shrift.
A new company called Translattice, backed by $9.5 million in funding from DCM, an early-stage venture capital firm, aims to change that with a new architecture that distributes applications. Make your application resilient, the thinking goes, and you needn’t spend quite so much on fault-tolerant hardware and extra network connections that will otherwise sit idle until they’re needed.
I talked last week with Translattice CEO Frank Huerta and Michael Lyle, its chief technical officer, about the company’s new architecture and its plans to shake things up in cloud computing.
AllThingsD: Frank, when you think of cloud computing and data centers, you tend to think that there’s already a lot of redundancy built into the infrastructure, and yet there are still lots of outages. What’s going on?
Huerta: One of the main problems we’re addressing is the complexity in the infrastructure. United Airlines went down recently, and so did USAir. You’re continuing to see more and more problems in the infrastructure, and the reason is that it’s starting to hit the wall in terms of what it can deliver.
So what does Translattice do to solve that?
Huerta: Translattice is about the deployment of enterprise class applications, like CRM and ERP applications in globally distributed environments, including the cloud. Everything else to this point has been monolithic. This is a different paradigm, and we think it opens up a lot of other advantages. We’ve built this platform for cloud and traditional applications. The components are all identical and all aware of each other so the system is aware of where the data is at all times. And by policy you can control where it is and how much redundancy you want. But they all work like they’re operating from one central database, when in fact they’re distributed around wherever you have a presence.
So how do you do it?
Huerta: One thing is that we’ve solved the distributed relational database problem. This had been an unsolved problem in IT for 25 years, so it’s a major technical accomplishment. We’ve taken all the key components in the data center — the storage, the database, the app server, load balancing — and built them into a machine we call a Translattice Node. A Node is a rack-mountable box with commodity hardware inside, and it can run as a physical appliance or as a virtual instance in the cloud, like on Amazon. This is the platform on which you run your applications.
How is it different from the traditional set-up?
Huerta: When you turn it on you get this re-mapping of what you can do with your applications. If you need additional computing resources in a certain location, you just add boxes there. The infrastructure nodes now share information amongst each other. Your performance is better, because we move data closer to where you’re going to be using it. If you move from New York to Germany, the system automatically sees where you’re logging in from and moves the data you use closer to you, so you get local performance.

In many ways it’s like what Akamai has done with Web content. They cache Web information so that when you visit a Web site you get served from a cache at a location that’s closer to you. But this is a generation more advanced. We do the same thing, but with dynamic application data in real time.

You also get better control of the data, and can control by policy where it can and can’t go. And then you get much better resilience. You can set policies concerning how much resilience you want in the system by saying how many copies of your data you want and whether or not you want it replicated on multiple continents.
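To make the idea concrete: Translattice hasn’t published its policy API, so all the names below are hypothetical, but the kind of policy-driven placement Huerta describes — a minimum number of copies, restrictions on which countries data may live in, and a preference for nodes near the user — can be sketched roughly like this:

```python
# Illustrative sketch only; every name here (Node, PlacementPolicy,
# choose_replicas) is invented for this example, not Translattice's API.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    country: str
    distance_to_user: int  # e.g. network latency in ms

@dataclass
class PlacementPolicy:
    min_copies: int         # "how many copies of your data you want"
    allowed_countries: set  # "where it can and can't go"

def choose_replicas(nodes, policy):
    """Pick the closest eligible nodes that satisfy the policy."""
    # Enforce the jurisdiction constraint first.
    eligible = [n for n in nodes if n.country in policy.allowed_countries]
    if len(eligible) < policy.min_copies:
        raise ValueError("policy cannot be satisfied with available nodes")
    # Prefer nodes nearest the user, so reads feel local.
    eligible.sort(key=lambda n: n.distance_to_user)
    return eligible[:policy.min_copies]

nodes = [
    Node("ny-1", "US", 80),
    Node("fra-1", "DE", 10),   # the user has just moved to Germany
    Node("lon-1", "UK", 25),
    Node("sgp-1", "SG", 180),
]
policy = PlacementPolicy(min_copies=2, allowed_countries={"US", "DE", "UK"})
print([n.name for n in choose_replicas(nodes, policy)])
# prints ['fra-1', 'lon-1'] — the two closest nodes the policy allows
```

In a real system the interesting part is re-evaluating this placement continuously as access patterns shift — the sketch only shows the decision at a single point in time.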
What kind of customers do you have?
Huerta: We have a few beta customers and we’re just in the process of getting our first paying customer, which we can’t announce just yet, and we’re setting up pilots with large financial companies and with governments. Financials and governments seem to be early adopters of this kind of technology because they can’t afford for things to go down.
We’ve seen how the federal government in the U.S. plans on cutting back the number of data centers it operates, and that it’s turning more to the cloud to save on operational costs. Is this likely to fit into that strategy?
Huerta: This would be one way for the government to make its infrastructure more efficient, sure. And certainly as it moves more stuff to the cloud, this is a strong platform for running legacy applications in the cloud while still keeping them within its own infrastructure.
Yet you’re distributing the data, and that idea is sometimes anathema to financials and governments who are usually the biggest sticklers when it comes to moving data across national boundaries. How do you get around that?
Lyle: We’re working with financial firms that have been forced to deploy five copies of their banking systems around the world, both for performance, because you need the data close to where it’s being worked on, and because they’re not allowed to have customer data cross national boundaries. That means they don’t have a minute-by-minute view of the business, and they have to run a big settlement process at the end of the day. They end up not being able to offer the same products to customers in every country. And just running five copies of all that infrastructure is expensive. Our ability to decentralize the system, and keep it as one big cohesive application processing platform while at the same time complying with all the business rules about where data is stored, really could revolutionize the way that banks do business.