Tech Trailblazers Showcase: MemVerge flash talk



For the first time, the 2020 Tech Trailblazers Awards teamed up with the London Enterprise Tech Meetup to host an event where some of the cream of the crop of entrants could showcase their award-winning businesses. One of those who gave a flash talk at the event was Charles Fan, CEO & Co-Founder of MemVerge, a firm which won both the AI and Big Data awards and was a runner-up in the FinTech category.

Charles gives an introduction to the firm and a quick explanation of how MemVerge is aiming to revolutionise persistent memory by putting a virtualisation layer on top of Intel’s Optane products, providing memory that is cheaper, bigger, and almost as fast as DRAM.

The host is Ian Ellis of the London Enterprise Tech Meetup, and the session is moderated by our very own Chief Trailblazer, Rose Ross. Also asking a question is Dr Jacqui Taylor, Founder & CEO of FlyingBinary.

Interview transcript

Ian Ellis: We’ll move on quickly now to Charles, who’s our next presenter, from MemVerge.

Charles Fan: Good evening everyone in London; I’m based in California so it’s morning for me. Great to meet you here, I hope you can see my slides. I’ll give you a five-minute introduction to MemVerge and the Big Memory software that we produce.

So, what we’re working on is an architectural shift in the computing model. Traditionally there are two buckets of data: there’s memory, typically DRAM, and there’s storage. DRAM is fast but it is expensive, it is small, and it is volatile, meaning if you turn off the machine you lose the data in DRAM.

Storage is persistent, it can hold data for the long term, it’s cheaper, it’s bigger, but it’s much slower, roughly 1,000 times slower than DRAM. So, because each is good at some things and not so good at others, you place active data in memory, and when it cannot fit, or when you want to save it for the long term, you move it to storage; data is moved back and forth between those buckets, and that is storage IO. As the world moves more and more to data-centric applications, especially real-time data-centric applications, this IO data movement between the two buckets can often become the bottleneck of an application, and this problem will only get worse as we move forward with more and more data coming in at faster and faster speeds.
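
To make that bottleneck concrete, here is a minimal sketch of the traditional pattern Charles describes: pull active data from storage into a volatile DRAM buffer, process it at memory speed, then pay the slow IO cost again to write it back. The file name and buffer size are illustrative assumptions, not details from the talk.

```c
/* Traditional two-bucket model: every round trip between storage and
 * DRAM is storage IO, roughly 1,000x slower than memory access. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *path = "dataset.bin";     /* hypothetical data file */
    size_t n = 64 * 1024 * 1024;          /* 64 MiB working set */
    unsigned char *buf = malloc(n);       /* volatile DRAM buffer */
    if (!buf) return 1;

    FILE *f = fopen(path, "r+b");
    if (!f) { free(buf); return 1; }

    size_t got = fread(buf, 1, n, f);     /* storage -> DRAM: slow IO */
    for (size_t i = 0; i < got; i++)      /* fast, in-DRAM processing */
        buf[i] ^= 0xFF;
    rewind(f);
    fwrite(buf, 1, got, f);               /* DRAM -> storage: slow IO */

    fclose(f);
    free(buf);                            /* the DRAM copy is now gone */
    return 0;
}
```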

Now, the good news is that there’s a new type of memory media, called persistent memory or non-volatile memory, that has just come to market. The first vendor is Intel and other vendors will follow. The Intel product is Optane persistent memory; the technology is shared with Micron, and some other memory makers are looking to make it in the next couple of years as well. This new memory is bigger, it’s cheaper, it is persistent, and it is almost as fast as DRAM. So now there is room for our type of software, which virtualises this heterogeneous pool of memory, DRAM and Optane together, accessing both directly in a byte-addressable way. We can deliver a Big Memory tier that is fast, that is big, that is cheap, that is non-volatile and highly available. What this makes possible is to eliminate the storage IO and allow your data-centric applications to live entirely in memory.
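
Byte-addressable here means the application reaches persistent memory with ordinary CPU loads and stores instead of read/write IO. On Linux this is commonly exposed through a DAX-mounted filesystem; the sketch below shows that general mechanism, not MemVerge’s product. The /mnt/pmem mount point is an assumption, and production code would typically also flush CPU caches (for example via libpmem) to guarantee durability.

```c
/* Byte-addressable persistent memory via a DAX mmap. Assumes a
 * persistent-memory filesystem mounted with -o dax at /mnt/pmem. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 4096;
    int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    if (ftruncate(fd, (off_t)len) != 0) return 1;

    /* MAP_SYNC (Linux 4.15+) maps the media directly, bypassing the
     * page cache, so stores land on the persistent memory itself. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) return 1;

    strcpy(p, "hello, persistent memory"); /* an ordinary store, no IO */
    msync(p, len, MS_SYNC);                /* coarse durability point */

    munmap(p, len);
    close(fd);
    return 0;
}
```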

The product we introduced is called Memory Machine. It delivers this memory virtualisation, providing DRAM-compatible, software-defined memory to applications without requiring any application change. We are user-space software that runs on Linux; we intercept all the memory-related function calls and deliver memory that is higher-performance, lower-cost, bigger, and that can also be persistent on demand. We invented a new mechanism called the ZeroIO in-memory snapshot. It is similar to a checkpoint in that it captures the entire application state, except it doesn’t need to move that state to storage; instead it persists it in place in memory. This allows you to build features such as autosave and thin clones. Essentially it lets you roll your application back to any point in the past, whether to increase productivity, to provide higher availability, or even, for security reasons, to provide forensics on a particular state of your application at some past point in time.
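
MemVerge’s ZeroIO mechanism itself is proprietary, but the idea of capturing application state in place, without writing it out, has a well-known analogue on Linux: the copy-on-write semantics of fork(2), where a child process holds a frozen image of the parent’s memory while the parent keeps mutating it. A rough sketch of that analogy, not of Memory Machine’s implementation:

```c
/* Copy-on-write snapshot analogy: fork(2) freezes an in-memory image
 * of the parent's state without copying it to storage. Illustrative
 * only; this is not how ZeroIO snapshots are implemented. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int *state = malloc(sizeof *state);
    if (!state) return 1;
    *state = 42;                 /* application state to snapshot */

    pid_t pid = fork();          /* CoW "snapshot": no bytes copied yet */
    if (pid == 0) {
        /* The child sees the state exactly as of snapshot time. */
        printf("snapshot sees state = %d\n", *state);
        _exit(0);
    }

    *state = 99;                 /* the parent keeps mutating freely */
    waitpid(pid, NULL, 0);
    printf("live state = %d\n", *state);
    free(state);
    return 0;
}
```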

So, it’s providing all these capabilities that were not possible before.

So, with this technology we can imagine a new world of Big Memory in the next 10 years, and this is going to sweep across all data centres, cloud data centres as well as on-prem data centres. For data-centric applications, all active data can be served out of Big Memory, rather than split, as it is today, between memory and the performance tier of storage. The storage market will continue to exist, but it will be relegated to supplying capacity for secondary, archival, and backup purposes. By doing this, we foresee unprecedented productivity for data-centric applications, and a lot of things regarded as impossible today will become possible.

So that’s a quick five-minute rundown of what we do. I’m happy to answer any questions that you may have.

Ian Ellis: Great, thanks. Jacqui, it looks like you’ve popped onto the screen there, do you want to go first?

Jacqui Taylor: Yes, sure. Hi Charles, that’s great, really, really interesting, what you’ve put together. Can I ask whether you’ve done anything about the memory leakage that can happen when data is at rest, and whether what you’ve got here solves some of that problem? I’m thinking about it from a security point of view.

Charles Fan: Yes. By delivering this software-defined memory tier, we provide an administration portal into your memory: through our GUI or through our API you can see all the applications that are active in memory, now or at any time in the past. You get better isolation between applications’ internal memory usage, you can monitor all the memory usage, and you can track where the memory goes. So we believe this enables capabilities that help manage memory leaks more effectively as well.

Jacqui Taylor: And can you set any tolerances on that, so that beyond a certain level you want to investigate, and set alerts?

Charles Fan: Yes, you can. So there are alert capabilities that you can configure and that can be sent to the admins.

Jacqui Taylor: Cool, thank you.

Ian Ellis: I’ve one question here from Hannah asking, is this similar to Intel 3D Memory?

Charles Fan: Yes. So, we provide a software layer that runs on top of various hardware, and Intel 3D XPoint, for which Optane persistent memory is the marketing name, is the enabling hardware. Intel is our biggest partner, and our software runs on top of Intel memory together with DRAM; we virtualise them and create a hybrid layer of software-defined memory that we deliver to the applications. The benefit is that it’s much bigger: today, up to 9 terabytes per two-socket server. It is also cheaper: retail 3D XPoint is about half the price per gigabyte of DRAM, and you can cut cost by 30 to 40 percent by mixing DRAM and PMEM together. And with our software innovation we can deliver this at DRAM speed. So essentially this takes Intel 3D XPoint memory and makes it DRAM-fast, while keeping the persistence and lower cost of Optane memory.
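
As a rough worked example of that pricing claim (the 1:2 DRAM-to-PMEM ratio and the unit price below are assumptions for illustration, not figures from the talk): if DRAM costs c per gigabyte and PMEM about c/2, a one-third DRAM, two-thirds PMEM mix prices out at (1/3)c + (2/3)(c/2) = (2/3)c, roughly a 33 percent saving, which lands inside the quoted 30 to 40 percent range.

```c
/* Worked example of the mixed DRAM+PMEM cost claim. The DRAM price
 * and the 1:2 DRAM:PMEM ratio are illustrative assumptions. */
#include <stdio.h>

int main(void) {
    double dram_per_gb = 10.0;               /* assumed DRAM $/GB */
    double pmem_per_gb = dram_per_gb / 2.0;  /* "about half the price" */
    double dram_share = 1.0 / 3.0;           /* 1:2 DRAM-to-PMEM mix */

    double blended = dram_share * dram_per_gb
                   + (1.0 - dram_share) * pmem_per_gb;
    double saving = 1.0 - blended / dram_per_gb;

    printf("blended $/GB: %.2f (%.0f%% cheaper than all-DRAM)\n",
           blended, saving * 100.0);         /* ~33% saving */
    return 0;
}
```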

Ian Ellis: And then Charles Clark follows on from that question asking about HP Memristors and the similarity there.

Charles Fan: The HP memristor was one of the earlier experiments in storage-class memory, or persistent memory. Unfortunately, it is not yet commercially available, so it’s not on the market yet, but when and if it arrives, we will support it. Essentially our view is that the memory world will become more heterogeneous, versus the last 50 years when DRAM was one-size-fits-all. Over the next 10 years I think it’s going to become gradually more heterogeneous, and our software is designed to manage the heterogeneity of that new world.

Ian Ellis: Cool. I’ve got a couple of questions coming in now. Ollie asks, how do you intercept memory calls without a huge performance hit?

Charles Fan: We are only intercepting memory allocation calls and memory free calls, so we sit in the control path of memory management. We do not stay in the data path for the actual load/store operations to memory. So, for normal memory access we add no overhead at all.
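
That control-path-only design echoes a standard Linux interposition technique: a preloaded shared library overrides the allocation entry points, while ordinary loads and stores to the returned memory still go straight to hardware. A minimal sketch of that general technique follows; it is illustrative and is not MemVerge’s code.

```c
/* intercept.c - minimal allocation-call interposer (control path only).
 * Build: gcc -shared -fPIC -o libintercept.so intercept.c -ldl
 * Use:   LD_PRELOAD=./libintercept.so ./your_app
 * Illustrative of the general LD_PRELOAD technique only. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>

static void *(*real_malloc)(size_t) = NULL;
static void (*real_free)(void *) = NULL;
static unsigned long live_allocs = 0;   /* crude leak indicator */

void *malloc(size_t size) {
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
    void *p = real_malloc(size);        /* delegate the real allocation */
    if (p)
        __atomic_add_fetch(&live_allocs, 1, __ATOMIC_RELAXED);
    /* Bookkeeping happens here, on the control path; the caller's
     * subsequent loads/stores to p hit memory with no added overhead. */
    return p;
}

void free(void *p) {
    if (!real_free)
        real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
    if (p)
        __atomic_sub_fetch(&live_allocs, 1, __ATOMIC_RELAXED);
    real_free(p);
}
```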

Ian Ellis: Great, and then can you talk a bit about your go-to-market, which OEMs are you going to partner with for initial commercially-available products?

Charles Fan: Our product is software-only; you buy it by annual subscription, and it can run on any servers you have, or your favourite servers: HPE, Dell, Lenovo, Supermicro, you name it. It’s your servers with our software on top, as long as the server supports Optane, the new Intel memory. And we do have a growing set of OEM partners as well. The first one we announced is with Penguin Computing: if you buy the LiveData solution from Penguin Computing, our software is built in, enabling the Big Memory service on that server. More will be coming, but even without that, it will run on any of the servers you have.

Ian Ellis: Great. Charles that was super-interesting, thank you for presenting.

Charles Fan: Thank you.