Verbatim: Alban Schmutz, VP Business Development, OVH
By Alex Roussel  |  December 27, 2013

On the agenda:
   • The acquisition of Oxalya by OVH, one year later
   • A 5 Tbps network and capacity for a million servers
   • OVH vs Amazon Web Services
   • Offering Simulation-as-a-Service…

Interview by Alex Roussel

Alban Schmutz, for those in the Big Data and HPC community who don’t know you yet, what is your profile and what are your responsibilities today at OVH?

Alban Schmutz: I’m in charge of OVH’s business development: our relationships with partners, vendors and developers, as well as with institutional organizations and the general public. The most recent illustration of this is OVH’s appointment, alongside Atos, as co-pilot of a cloud computing project for one of the 34 industrial plans announced by the French government.

Historically, I started my career by founding an open source software company, Linagora. The name is now well known and carries some authority in the field. I then left Linagora to create Oxalya, which began operations in 2005. Oxalya specializes in HPC on demand with a strong focus on numerical simulation, which is why OVH acquired it in 2012.

What exactly made you create Oxalya, and how did this lead you to the cloud?

Several customers came to Linagora asking us to build their HPC infrastructure. The reason was obvious: a compute cluster runs Linux in 95% of cases, and when you are a specialist in open source software, people come to you for this type of sensitive project. At the time, quite frankly, we didn’t know much about supercomputing. But we did it for a first customer, a university as it happens.

When we got a second request of the same kind, we thought we could probably automate the process. That’s how Oxalya was born: rather than redoing the same work for each client, why not offer solutions whose real added value lies in automation? You know, typically, we’re dealing with scientists doing system administration, whereas their talent is in research. It is much more interesting for them to work on their specialty in physics or chemistry than to manage computer problems. Oxalya’s mission is to make life easier for supercomputing users by automating the management of their infrastructure.

Were you already thinking of cloud computing in 2005 or were you still concentrating on on-premise projects?

Initially, Oxalya only worked on on-premise projects. We were well aware that people wanted to have their machines locally but, at the same time, we thought that pooling resources made more sense. Resource usage always fluctuates: there are load peaks, but overall usage varies over time, so average utilization is never optimal. It is therefore more efficient to share. However, using computing clusters outside of the organization, public or private, raises a number of questions: how do I remotely access my resources, view data, collaborate with researchers based at other sites, and so on?

The market did not yet seem ripe for this distributed approach, this “HPC cloud” we initially had in mind. But in the end everything moved very fast. From our first on-premise deployments in 2005, we began working on issues such as remote visualization and collaboration in 2006, which led us to launch our first “HPC on demand” offer in February 2008…
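To make the pooling argument from the previous answer concrete, here is a toy calculation in Python. All the figures are invented for illustration: three organizations whose load peaks at different times of day can share a pool sized well below the sum of their individual peaks, and the shared pool runs at a much higher average utilization.

# Toy model of the pooling argument. All figures are invented:
# three organizations whose demand peaks at different hours.
demand = {
    "org_a": [200 if 8 <= h < 12 else 20 for h in range(24)],   # morning peak
    "org_b": [200 if 13 <= h < 17 else 20 for h in range(24)],  # afternoon peak
    "org_c": [200 if 20 <= h < 24 else 20 for h in range(24)],  # evening peak
}

# Dedicated clusters: each organization provisions for its own peak.
dedicated_cores = sum(max(load) for load in demand.values())

# Shared pool: provision for the peak of the combined load instead.
combined = [sum(load[h] for load in demand.values()) for h in range(24)]
pooled_cores = max(combined)

total_core_hours = sum(combined)
print(f"dedicated: {dedicated_cores} cores, "
      f"avg utilization {total_core_hours / (dedicated_cores * 24):.0%}")
print(f"pooled:    {pooled_cores} cores, "
      f"avg utilization {total_core_hours / (pooled_cores * 24):.0%}")

With these invented numbers, dedicated sizing needs 600 cores running at 25% average utilization, while the shared pool needs only 240 cores at about 62%, which is exactly the economics described above.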

How did you manage to develop a sustainable business model in a market where initial investment is so high?

We worked with HP to implement our first computing infrastructure. HP helped us finance the equipment; on our side, we had largely financed the development of the solution components. Technically, we had automated management, so we could swap machines in and out of the cluster as we went along and isolate groups of machines for particular clients. Public Tier-1 or Tier-2 HPC datacenters have no problem isolating users, especially in terms of security, but on shared hosting you cannot imagine having GM and Ford on the same machine. So we worked hard on these problems in our software stack and, as early as 2008, we were ready to go.
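As an aside, here is a minimal sketch of the bookkeeping that this kind of per-client isolation implies. All names are hypothetical and the actual Oxalya/OVH stack is not described in the interview; the point is only that each node belongs to at most one tenant, so two competing clients can never end up on the same machine.

# Minimal sketch of per-client node isolation in a shared cluster.
# Hypothetical names; not the actual Oxalya/OVH implementation.
class ClusterPool:
    def __init__(self, nodes):
        self.free = set(nodes)   # unassigned machines
        self.tenants = {}        # tenant name -> set of its nodes

    def allocate(self, tenant, count):
        """Carve out an isolated group of nodes for one client."""
        if count > len(self.free):
            raise RuntimeError("not enough free nodes")
        grabbed = {self.free.pop() for _ in range(count)}
        self.tenants.setdefault(tenant, set()).update(grabbed)
        return grabbed

    def release(self, tenant):
        """Return a client's nodes to the pool when its jobs end."""
        self.free |= self.tenants.pop(tenant, set())

pool = ClusterPool(f"node{i:03d}" for i in range(16))
pool.allocate("gm", 8)
pool.allocate("ford", 8)   # disjoint from GM's nodes by construction
assert not (pool.tenants["gm"] & pool.tenants["ford"])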

At the time, cloud computing was not what it is today, especially in supercomputing. How did you launch your offer?

With a campaign of free cluster hours for SMEs, under certain conditions and for a limited number of users. SMEs typically face budget constraints and skills shortages, so the targeting was relevant. This operation was not comparable to free hosting: we really aimed at helping companies run intensive computations and we wanted to promote their innovation projects. We therefore set quotas and gave millions of hours to the projects that met the criteria.

Was it successful?

Actually… yes and no. Yes in the sense that all our resources were quickly used to capacity, both by paying clients (generally large accounts familiar with HPC who only needed machine time) and by the free-program clients. So yes, overall, it was successful. Meanwhile, we learned a lot about SMEs’ access to HPC. A year later, we compared the applications we had received with their effective use… and we saw a big difference. Among the companies that did not use the platform, in approximately 25% of cases the person who had made the request had left the company. Another quarter of the companies had applied because it was free, but had no real computing need.

It was all the more surprising as serious application files had to be completed and approved by a technical committee. We also noticed that some of the requests were made by students… But the really interesting point is that the SMEs who truly used our resources were already convinced of the strategic value of supercomputing. Clearly, they wanted the support of an external service provider to save time and be more efficient.
