What does it take to build Europe’s largest and most sustainable data-centre campus, from an empty plot of land to a 1.2-gigawatt giant of AI? How do you future-proof a facility when chip technology is evolving at breakneck speed? And what happens when the site of a former coal-fired power plant becomes a global hub for AI?
In this special, on-location episode of Cleaning Up, Michael Liebreich visits Sines, Portugal, where Start Campus is transforming the site of a decommissioned coal plant into a next-generation data-centre campus that once finished will be Europe’s largest data centre.
CEO Robert Dunn takes us inside the first operational building, currently 29MW but just 2.5% of what’s to come, to explore the engineering, economics, and vision behind a €10 billion physical infrastructure build that will eventually house an additional €40 billion in incoming IT hardware.
From earthquake-proof structures to seawater cooling and uninterruptible power supply systems, Rob breaks down what it means to design for 99.999% uptime in an AI-driven world. Michael and Rob also dive into the reality and hype surrounding AI: the surge in GPU-hungry AI training, the race to build at gigawatt scale, the challenges of financing these mega-projects, and the balancing act between speed, cost, sustainability, and long-term viability.
Set against the backdrop of Microsoft’s freshly announced $10 billion investment in the Sines campus, this episode illuminates how the data-centre industry is reshaping global energy systems, local communities, and the future of compute.
Leadership Circle:
Cleaning Up is supported by the Leadership Circle, and its founding members: Actis, Alcazar Energy, Davidson Kempner, EcoPragma Capital, EDP of Portugal, Eurelectric, the Gilardini Foundation, KKR, National Grid, Octopus Energy, Quadrature Climate Foundation, SDCL and Wärtsilä. For more information on the Leadership Circle, please visit https://www.cleaningup.live.
Michael Liebreich
When it reaches 1.2 gigawatts, when it's fully built out as a campus, how much will you have invested in the site, and how much will the customers then have invested? What will be the total envelope of the money?
Robert Dunn
So a normal metric that we generally work to is around €10 million per megawatt. Now, because we're building at such a large scale, we should be able to develop it at slightly less than €10 million a megawatt. So our latest estimates are coming in at about €10 billion just for the physical infrastructure of the data center. That's over the 1.2 gigawatts. And then we expect our customers to bring in their GPUs, their CPUs, their IT equipment. That's generally around four times as much. So we're looking at €40 billion of IT equipment going in to add on to the €10 billion that we're spending.
ML
Hello, I'm Michael Liebreich, and this is Cleaning Up, and today we're bringing you the second of our On Location episodes from Sines in Portugal. I'm here as the guest of Start Campus, a portfolio company of one of our Leadership Circle members, Davidson Kempner, and they are building a huge data center right here. The first phase is up and running, that's just 29 megawatts. But ultimately, this location will be hosting 1.2 gigawatts of data center capacity. And it's a really interesting location, because this used to be a coal-fired power station, and so we're going from coal to AI. I have got lots of questions. What does it take to build a facility like this? How do you make sure it's a good data center, with all these projects proliferating around the world? What is the business model of the data center sector? How do you make sure that something like this plays nicely with the locals and also with the grid? I'm going to be putting those questions and more to Rob Dunn, the CEO of Start Campus. Please welcome Rob Dunn to Cleaning Up.
ML
Before we get started, since recording in Portugal in October, Microsoft has announced that it will be making a $10 billion investment in the Start Campus Sines data centre. That’s a real vote of confidence in the work that Rob and his team have been doing, as well as in the foresight of his backers.
ML
Rob, thank you very much for not just joining us here on Cleaning Up, but also showing us around your baby.
RD
Well, thank you so much for having me. It's great to be here.
ML
Let's start as we always do. Can you explain, in your own words, who are you and what is it exactly that you're doing here?
RD
Okay, so I'm Rob Dunn. I'm the CEO of Start Campus. We're here in Portugal, building Europe's largest and most sustainable data center. So we're sitting today in the battery room of our first data center, the first of six large data centers in a town called Sines in Portugal. And our ambition is to grow massive scale data centers in a very sustainable way to support our customer base. And I'm sure you're hearing a lot about the AI and other workloads that are growing that huge demand at the moment. We're here to support those customers for the long term.
ML
You said we're here in a battery room of a data center, and I have to say that every so often the fans go on and go off because you are already operating here. So you've got some operations, we'll hear exactly what you've got so far. But if the fans go on, that's presumably somebody just using it, making a cat video or doing something, we get the fan noise, right?
RD
No, exactly. It's very important. We're sitting in a battery room. These batteries, to ensure longevity, need to remain at 22 to 24 degrees Celsius. So those air fans you hear at the moment are there to support that. But yeah, we've been operational since about September last year. We started building this data center in 2022. The first data center was designed originally to be 14 megawatts, and now, due to over-demand by customers and the increasing capacity that NVIDIA is pushing out every year, we've managed to double that capacity. So what you've seen today is a new phase of construction. This data center is not only operational, but it's also being expanded at the same time.
ML
But when you say this data center, the campus, I think it's really an astonishing thing for me. The building we're in is 14 growing to 29 megawatts, and it's immense. We've just been walking around looking at it. But it's also only 2.5% of what will eventually be on the campus. So the campus is going to be 1.2 gigawatts.
RD
Yeah, so this might look like a big building. This is our baby in the big scheme of things. The entire campus, once it's built out, will be 1.2 gigawatts of IT load, making it, as I said before, the biggest data center in Europe. And I can say that with confidence, knowing it's the only one that's got more than a gigawatt of power capacity delivered by the grid.
ML
So I've got a bunch of questions that I want to run through in terms of why data centers? How does the business model work? What makes a good one? And how do the locals react? And so on. But before we do, let's just give a bit of context: I actually was involved in this project in a funny way, before you were, before you joined to run it. And that was back in 2021 when the investors, so that's our Cleaning Up Leadership Circle member, Davidson Kempner, and also a company, an investor called Pioneer Point, they wanted to raise the profile and float the idea that the future of data centers in Europe was not Frankfurt, London, Paris, Dublin, Amsterdam, but actually would be outside towns, and they'd be absolutely enormous. And kind of nobody was engaging with this idea; it seems extraordinary today, but people thought that data centers needed to be in cities because of latency. And so I edited and helped to write a report about what we dubbed Green Giants, Green Giant Data Centers, and we launched it at the Glasgow COP in November 2021. And it was a sort of a desperate attempt to get people to understand that data centers needed to be really big and really clean, really renewable, and use lots of existing assets and so on. And I think it kind of did its job, because here we are.
RD
Well, I think now it's proving to be very true, isn't it? So I first read that report in 2021. I met the guys from Davidson Kempner. I was in a pretty comfortable job. I spent 10 years at a company called Digital Realty building data centers all around Europe, in those cities, in those markets that everybody knows about, the cloud markets. They call it the FLAPD, which is Frankfurt, London, Amsterdam, Paris and Dublin. This is where, traditionally, all the data centers were being built in Europe. And a lot of those were, 40 to 50, maybe even 100 megawatts at the time, that was a bit of a stretch. When I read, when I heard about this project from Davidson Kempner, and then they introduced me to the Green Giants paper, I could see that there was a huge amount of ambition there. And it made sense. It made sense that these things were going to grow, and it made sense, if we could, to grow them in a smaller number of locations where you could get all the great attributes of renewable, low-cost power, great connectivity, and potentially even amazing cooling solutions as well.
ML
And I think it's important to remember, because it's so easy to index on today: this was before ChatGPT. It was before the AI boom — or bubble — depending on how you want to see it, but before the… I mean, you know, there's clearly elements of both. But it was before all of that, which is why we didn't call them AI data centers. We called them these Green Giants, but we spent a lot of time on how these huge data centers would play nicely with the grid, and with local communities. So I want to get onto that. But first, let's just sort of dive into what it takes to build a 29 megawatt data center, but really that's only the pilot for this gigawatt-scale data center. What do you actually have to build? When you arrive, there's nothing. What do you do? You are the guy who does this. So what do you first do, exactly?
RD
It takes a lot of planning and a lot of design work, a lot of permitting. You spend two, three years planning these projects before you even break ground. And then once you do, you've got to make sure you have everything lined up so that they can go at speed. And it's across the board. You need to make sure the structure is well designed to support the growing data center loads that we have walked around today. I showed you the floor in here that supports a two to three ton rack, and we could have hundreds of those in a small location. So it needs to be very well designed.
ML
So when you said loads there, you mean literally, physically the weight.
RD
The actual weight of these racks. They're very heavy things, and they need to be rolled around.
ML
How big is your campus area, and how big is this building? Just in terms of square meters, or acres or something?
RD
So around 64 hectares of land here, and the first building is on about a 10 hectare plot.
ML
Now, it's going to be 2.5% of the compute, or of the megawatts, or the gigawatts, but it's already 10 hectares out of 64. Is that because there's just so much you've got to build: roads, and all the infrastructure?
RD
Exactly, there's a lot of common infrastructure that gets put in the first building. But you'll also note that this first building does take up a decent footprint. And we've got a lot of flexibility built into it, because it's our first building, and that's actually paid dividends to us. We've been able to double the capacity from the original design, and we'll probably end up at more capacity as we go. The future buildings are very efficiently designed. They're two stories as well. So we're trying to get all of the data space on a single floor, and then all of the electrical infrastructure up on the first floor, away from the IT space. So it just maximizes the use of that plot.
ML
Okay, but back to the building. So it's got to sustain the loads, the actual physical weight of not just the racks, but you've got a lot of the cooling equipment. And also, this is pretty heavy engineering. There was a big earthquake that destroyed Lisbon in 17 something, something. There's actually a museum, if anybody who listens to this comes to Lisbon, there is a museum where you can sit in a chair and it'll shake you around and frighten the whatsits out of you. Probably in your role, it would frighten you rather more than most people. Are you not worried about earthquakes?
RD
A little bit, but I've built in a number of zones that have minor earthquakes from time to time. And we designed the data centers in all of those areas to withstand those tremors. You can see as you walk around the strong supports that we put in place, so that all the pipework, all the electrical equipment, all the IT equipment is well supported, and things aren't loose and don't fall around.
ML
So if there is an earthquake during the filming of this episode, we will just continue. We'll just wait…
RD
No break, just continue.
ML
No break, and everything would be all the little blinking lights, everything would be fine, right?
RD
Exactly.
ML
So let's just put this in context, though. The reason you want this thing to just keep operating, irrespective of anything, is of course for resilience of what's going on in the servers, but also it's a very expensive facility you're building. Can you just tell us how much? Because you are building the building, and the cooling, and some of the electrical infrastructure, which we'll get into, and then your customers are filling it with the GPUs, with the chips, with all the equipment. When it reaches 1.2 gigawatts, when it's fully built out as a campus, how much will you have invested in the site, and how much will the customers then have invested? What will be the total envelope of the money?
RD
So a normal metric that we generally work to is around €10 million per megawatt. Now, because we're building at such a large scale, we should be able to develop it at slightly lower than €10 million a megawatt. So our latest estimates are coming in at about €10 billion just for the physical infrastructure of the data center. That's over the 1.2 gigawatts. And then we expect our customers to bring in their GPUs, their CPUs, their IT equipment. That's generally around four times as much. So we're looking at €40 billion of IT equipment going in to add on to the €10 billion that we're spending.
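For readers who want to check the arithmetic, here is a minimal Python sketch using only the round numbers quoted in the conversation. The variable names are ours and the figures are illustrative, not official estimates.

```python
# Back-of-envelope check of the figures quoted above. Illustrative only; the euro
# amounts are the round numbers from the conversation, not official estimates.

campus_mw = 1_200                       # planned campus IT load, megawatts (1.2 GW)
benchmark_eur_per_mw = 10e6             # ~EUR 10 million per MW industry rule of thumb

benchmark_infra = campus_mw * benchmark_eur_per_mw   # EUR 12 bn at the raw benchmark
quoted_infra = 10e9                                  # ~EUR 10 bn quoted, reflecting scale savings
quoted_it = 4 * quoted_infra                         # customer IT spend, roughly 4x infrastructure

print(f"Benchmark infrastructure capex: EUR {benchmark_infra / 1e9:.0f} bn")
print(f"Quoted infrastructure capex:    EUR {quoted_infra / 1e9:.0f} bn")
print(f"Implied customer IT capex:      EUR {quoted_it / 1e9:.0f} bn")
print(f"Total investment envelope:      EUR {(quoted_infra + quoted_it) / 1e9:.0f} bn")
```

That roughly €50 billion total is what Michael converts to "a $60 billion facility" in the next exchange.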
ML
I was just doing mental arithmetic. So you do €10 billion. So that's whatever it is, $12 billion. And then you multiply that by five to get the total. So it's a $60 billion facility. When will that be done? What are your phases that you're going through? Because obviously, this is 29 megawatts. This is the first 2.5% of that, and it's already operating in part. When does this building get finished? And then you've got, presumably four or five more, they get bigger and bigger, I'm gonna guess?
RD
Exactly. So the upgrade of this building will be finished in the early part of next year. Call it March, April. But at the same time, we're starting the next building. So we'll be breaking ground on the next building in the next few months, and customer demand willing, we should have that finished by 2027, and rolling into the following buildings as we're still building out the next one. So we're aiming for about a six to nine month stagger, start to start, which allows us to finish all five buildings by 2030.
ML
2030, so another five years. And you broke ground when? Because you joined just before the groundbreaking, I believe?
RD
Exactly, so almost the day I joined, we were breaking ground on this building, and that was in early 2022.
ML
And the prep work, the permitting and so on, that had already happened? How much of the detailed engineering design had already happened?
RD
For the first building, the detailed engineering was already there and it was ready to start building. So the contractors were already on board. The master planning was then done for the rest of the campus.
ML
So it's eight years to get to 1.2 gigawatts…
RD
Effectively.
ML
… to go through the staggered stages and so on. Now let's dive into just a few of the other systems that you're putting in. Because you're doing a lot of electrical engineering. There have been a lot of electricians involved here, I can just tell that from the walkthrough. But also you've got some innovative approaches to cooling, which are ultimately going to make this a very good and very low cost data center, right?
RD
Exactly.
ML
Okay, Rob, there's pipes everywhere. Where are we? What's happening here?
RD
Okay, so this is our mechanical plant room. So this is where all the cooling happens for the data center. We have a combination of technologies in here. The more traditional are the chillers and, outside, the cooling towers.
ML
The chillers are over there behind those boards?
RD
Over there, we’re protecting them while we’re under construction. So these are more traditional, been in data centers for almost 20 years, and they'll provide the heat rejection for normal air cooling like any old data center will have. We've only brought that in as a backup for our first building, just while we were testing the sea-water cooling technology. So our primary and future-facing technology is sea water cooling. What you can see behind me here is a bunch of heat exchangers that are titanium plated, that allow the sea water to pass through and then reject out to the ocean. And on the process-cooling water side, they then allow us to chill down the air that gets pumped into the data center. And it's doing that at about a 1.1 PUE, which basically means it's about three to four times more efficient than these old chiller cooling towers that we have.
ML
PUE, acronyms…
RD
Power usage effectiveness.
ML
So that's the ratio of the total power that comes into the building versus the power that actually goes into the chips, as opposed to running all of this.
RD
Exactly. So for a megawatt of IT load, if we use 100 kilowatts to cool it down, that's a 1.1 PUE. So cooling is a big part of the infrastructure we need to put in to make sure those racks can operate no matter what happens. Traditionally, it would be air cooling at the rack level that would provide the support to keep those racks at 24 to 26°C. And then you need to have a heat rejection mechanism. Traditionally, that would be a combination of chillers and cooling towers, and that can be quite expensive, not only in terms of the installation cost, but also the operational cost. Now we have some of that infrastructure here, but we call that our backup system. Our primary system for heat-rejection is a sea water cooling system. That's been used in data centers all around the world for many, many years. I've installed that in London. It was more from the dock than from the sea. There are similar examples in Marseille and Finland of this being used. It's a very effective way of cooling the data center. You can basically keep the rack cool by using three times less power to do so.
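As a quick illustration of the definition Rob gives, here is a minimal Python sketch. The 1.1 and 1.3 figures are the ones quoted in this conversation; in practice the overhead term also includes distribution losses, lighting and so on, not just cooling.

```python
def pue(it_power_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return (it_power_kw + overhead_kw) / it_power_kw

# Rob's example: 1 MW of IT load with ~100 kW of cooling overhead -> PUE 1.1
print(pue(1_000, 100))   # 1.1, the seawater-cooling figure quoted here
# A traditional chiller/cooling-tower setup sits closer to:
print(pue(1_000, 300))   # 1.3
```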
ML
So you have got here, though, in this building, you've got a duplicate system? You've got the air chillers, and you've got the sea-water cooling. And each of them could completely deal with the heat load of this building, right?
RD
Effectively, yes, we did that just to make sure that everything operational will be working with the sea water before we decide eventually to go 100% on sea water for this building and for the rest of the campus.
ML
So ultimately, this one's got duplicate air and sea water. The later ones will have duplicate sea water and sea water, because you've always got to have duplicate everything.
RD
Always n+1, yeah.
ML
So n+1, that doesn't quite qualify as an acronym. I've warned you that acronyms always have to be explained here. But what is n+1?
RD
For any system it means that one part of the system can go down, or you could be maintaining one part of the system and still maintain 100% of the load to the rack.
ML
And only n+1, you're not doing n+2, where two things could fail simultaneously?
RD
There are some scenarios where you might have 14 fan-wall units that do the cooling at the rack level. And you might, because there were so many of them, you might put in n+2 to make sure that you have… Our job is to make sure we have 99.999% of resiliency.
ML
Three nines?
RD
Well, five, basically. So you do some calculations to see whether you need to be n+1 or n+2 for any bit of equipment.
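To make the target concrete, here is a small Python sketch: first, what "three nines" versus "five nines" means in downtime per year, and second, a toy binomial calculation of the kind that informs an n+1 versus n+2 decision. The 99% per-unit availability is an invented illustrative figure, not anything quoted by Rob.

```python
from math import comb

# 1) What an availability target means in downtime per year.
minutes_per_year = 365.25 * 24 * 60
for target in (0.999, 0.99999):          # "three nines" vs "five nines"
    print(f"{target:.3%} availability -> {(1 - target) * minutes_per_year:.0f} min/year of downtime")

# 2) Toy n+1 vs n+2 comparison: probability that at least `needed` of the installed,
#    independent units are healthy. The 99% per-unit availability is invented for
#    illustration, not a real equipment figure.
def system_availability(needed: int, spares: int, unit_availability: float) -> float:
    total = needed + spares
    p = unit_availability
    return sum(comb(total, k) * p**k * (1 - p)**(total - k) for k in range(needed, total + 1))

print(system_availability(needed=13, spares=1, unit_availability=0.99))  # n+1
print(system_availability(needed=13, spares=2, unit_availability=0.99))  # n+2
```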
ML
And I guess once you've got multiple buildings, you can also do it through moving things between buildings, depending on which clients you've got and so on?
RD
When we talk about the cooling for the entire campus, that will be a huge shared infrastructure. We'll have a large heat exchanger building, which allows us to pass the sea water through these titanium-plated heat exchangers and then back to the sea. That'll be a shared infrastructure we use for the entire campus again, allowing us to not only operate more effectively, but also keep the cost down.
ML
Let's go a bit deeper on cooling, because there's a shift going on from the earlier versions of these GPUs, which could be air-cooled, because they were generating what at the time you probably thought was a huge amount of heat; we're talking about three to five years ago. Now they're getting so powerful that you're starting to have to cool them with liquid to the chip. And the liquid is water to the back of the rack.
RD
Exactly.
ML
So you're simultaneously making that shift from air cooling to the chip to water cooling to the chip?
RD
Right, exactly. So at the IT level, we need to, as I said, maintain the temperature of the chips. And you can do that through two methods. The first is blowing cold air through the rack, and that can maintain the temperature if the rack isn't too power-hungry. That works up to 20-30 kilowatt racks, no problem. But once they start to get up to sort of 50-60 kilowatts a rack, you need to find some supplementary cooling.
ML
And 20-30 kilowatts. I'm just remembering, when I was a kid, we had an electric heater, actually from before we had central heating. And you had these sort of filaments that would glow red and warm the room. And each filament was one kilowatt. So you've got a rack (these are batteries, but a rack which sort of looks a bit similar in size), and you're saying that it was sort of 20-30, up to 60 kilowatts. So like 60 one-bar electric stoves. Those are now considered kind of little and pathetic and old, right?
RD
Exactly. So you can imagine the next generation of racks that we'll be installing now for our customers: 130 kilowatts a rack. So imagine 130 of those red filaments in one place. It's pretty hard to keep them cool.
ML
And what about the next generation? How do you future proof? I mean, it's not going to stop, right?
RD
No, so we build flexibility into the building itself. We're not here to just supply power and cooling for the next generation of chips. These buildings are designed for 15 to 20 years, maybe even 25. But we fully expect that in 5 to 10 years' time, the racks will go from 100 kilowatts a rack to a megawatt, or maybe even more. So the amount of space we need to actually install those racks will go down and down and down. But the amount of space that we need to provide the electrical equipment to support the racks will increase. So we're just repurposing the space so that we can hopefully increase the capacity of each of the buildings.
ML
So in the end, you'll end up with a quantum computer surrounded by an absolutely colossal amount of cooling and cabling as well. We haven't talked about that exactly, so the electrical connections. Let's go a little deeper on that. As we walked here, we saw rooms full of spaghetti. And when I say spaghetti, I mean we're talking about cables like that.
ML
Rob, where are we? What have we got here?
RD
Okay, so now we're in the electrical switch rooms. This is where all the power comes into the building, and we provide the resilient power to the data hall space. So we bring the power in at 11 kilovolts, which then comes into a ring main unit.
ML
Is that the big cable up at the back?
RD
That's the big red cable there. It comes into a ring main unit and a transformer, and then it transforms down to 400 volts, and then that supports the data hall itself. So you've got the switchboards here. They send the power via a busbar to the data halls. We're running, I think, two megawatt busbars here. So each of these switchboards provides two megawatts of power to the data hall, and we have UPS (uninterruptible power supply) here. So if there's ever a power outage at the grid level, there's no break at the IT level. The UPS picks up the load, and it's just there to make sure the generators can then pick up the load and take over from the UPS.
ML
How long does that last for?
RD
So we say end of life batteries will hold for five minutes. But generally it's between seven and 10 minutes, which is more than enough time for the generators to kick in.
ML
So those are the battery rooms there.
RD
Exactly.
ML
So that's a UPS control, and then you've got the batteries.
RD
Exactly. We've got the lithium ion batteries sitting in there, behind those doors and they support these UPS blocks here.
ML
Would this place have been full? It must have been full of electricians at some point. I mean, every single cable had better be right.
RD
Exactly. If the data center took us 18 months to build, nine months of the hard work goes in here, making sure these cables are perfectly terminated and tested before we go operational.
ML
So Rob, this is the kind of the spaghetti room, where you're actually still wiring it all up. Let's have a look.
RD
So this is the work in progress.
ML
And that is a lot of copper.
RD
It's a lot of copper.
ML
That's a lot of copper, that's incredible, look at that, although I wouldn't want to touch it while it was live.
RD
Please stay away from the live panels.
ML
Yeah, absolutely, I'm not gonna. Good. Amazing.
ML
Let's take it from the substation, because we'll talk about where the electricity comes from. But from the substation, how do you get that much power? It's going to be 1.2 gigawatts, and then it has to be broken down all the way down to the GPUs. How do you do that?
RD
So we're building our own large substation for the entire campus. This first building has a smaller substation, which is fully built. But if we start at the larger substation, we're taking 400 kilovolt power from the national grid.
ML
So that's the power that will come on the really big cables?
RD
Exactly, you can see them as you're driving down the highway. So we'll connect that into our own substation. Then we need to step that down to the next reasonable voltage stage, which is, in this instance, 150 kilovolts. And then we'll take that and distribute it to each of the buildings. So there'll be five different buildings that each have their own substation. And then we can step down again to 22 kV. And then you distribute the power into the buildings from there, and once it's inside the building, then we have to make sure that it's fully resilient. So you not only have the power connection, but you make sure that you, in parallel, have UPS and you have generators.
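A small Python sketch of why the voltage gets stepped down in stages rather than all at once: for a given power, the current (and hence the conductor size) scales inversely with voltage. The voltage levels are the ones Rob lists; the unity power factor is a simplifying assumption of ours.

```python
from math import sqrt

def line_current_amps(power_w: float, line_voltage_v: float, power_factor: float = 1.0) -> float:
    """Approximate line current for a balanced three-phase load."""
    return power_w / (sqrt(3) * line_voltage_v * power_factor)

campus_load_w = 1.2e9   # 1.2 GW of IT load
for volts in (400_000, 150_000, 22_000, 400):
    print(f"{volts / 1000:>7.1f} kV -> {line_current_amps(campus_load_w, volts):>12,.0f} A")
```

At 400 kV the whole campus draws on the order of a couple of thousand amps; at 400 V the same power would be millions of amps, which is why the low-voltage stage only exists in short runs from the switchboards to the racks.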
ML
UPS, Acronym…
RD
Uninterruptible power supply.
ML
And that's what this is. This is the battery room.
RD
It supports the UPS. So we have UPS in this building, initially 2 MW per block; we'll step up to 2.5 MW per block. But in this room, you're seeing 2 MW worth of batteries that can keep the customer's load running for up to 10 minutes, or five minutes at end of life. Now, that's only there so the generators can start running; the generators will pick up the load and then we'll transfer onto them.
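A back-of-envelope sketch, in Python, of the bridging energy one such block implies. This is illustrative only; real UPS sizing also accounts for inverter efficiency, depth of discharge and battery ageing.

```python
# Usable battery energy needed to bridge one 2 MW UPS block until the generators take over.

block_load_mw = 2.0
for minutes in (5, 10):                 # end-of-life vs new-battery hold-up quoted above
    energy_kwh = block_load_mw * 1_000 * minutes / 60
    print(f"{minutes} min at {block_load_mw} MW -> about {energy_kwh:.0f} kWh of usable battery energy")
```

A few hundred kilowatt-hours per block is why batteries here are sized for minutes of ride-through rather than hours, with the generators carrying any longer outage.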
ML
So if there's a problem, the first thing is immediately over to the batteries, and then you've got five, or initially 10, but as the battery life degrades, five minutes to get the generators running.
RD
Exactly.
ML
Okay, just before we go to the generators, the chips run on 400 volts, AC or DC?
RD
AC.
ML
Okay, interesting. So that's what you're doing within this building. You're going from, what did you say, 22 kV, down to 400?
RD
Yeah
ML
And transformers, or solid state chips to do all of that? How is that done?
RD
So it's done with transformers, traditional transformers.
ML
Traditional transformers… Because power electronics is getting very clever these days.
RD
Yeah, exactly. And we expect that moving forward in a few years time, when we're talking about 1MW racks and 2MW racks, we will be looking at DC. We will be talking about potentially powering them at an MV level, not at an LV level, so medium voltage instead of low voltage. That's coming, and that'll be a better use of CapEx, that'll be a better use of space in the future.
ML
And what temperature do the racks and the chips operate at? You're keeping it at what?
RD
Historically, they like to be 24 to 26°C, but I think there's been a bit of a push recently, for efficiency reasons, to try to increase that temperature. So more and more customers are willing to operate in the 27 to 32°C range, which is obviously a lot more efficient: it requires a lot less power and a lot less cooling.
ML
Cooling efficient. But of course, there's a trade-off: the hotter the chips run, the shorter their lifetime.
RD
Exactly. But there's been a lot of work done by Nvidia and Intel and others to design chips that can run at those hotter temperatures.
ML
What's the lifetime of a chip?
RD
They're testing that at the moment, aren't they? So running these high compute AI models. I mean, the expectation is, easily, they'll run for four or five years, but I think there's a lot of people that are anticipating they could run for six or seven years.
ML
I'm smiling as I asked that question, because it's a controversial subject. There are some analysts saying that there's a huge problem here, because, actually, they don't last four or five years. And then there's others who say they do, and my suspicion is they probably, at the moment, don't. But they will soon, because you work on that reliability, just as EVs or batteries or anything else, they started off with short lifetimes, and then it got longer.
RD
Exactly. There's different reasons why the lifetime might be reduced, maybe the environment they're installed in, or the power fluctuations they're exposed to. So it's our job to try to make sure the environment and the power is stable for them.
ML
Okay, we've got to talk about those generators. I drove past them. They are big, they're ugly, and they burn fossil fuels. I mean, you've got electricity that you'll be purchasing. These are just the backup generators we're talking about.
RD
Exactly. So we hope that they only have to run in maintenance mode a few hours a year. Occasionally, they do need to kick in if there's a grid outage. But what we're trying to do is make sure we're using the most emission free fuel that we can. So traditionally, people use diesel in the European market to support their generators. We're using an alternative fuel called hydrogenated vegetable oil, HVO.
ML
Very good. You avoided just using the acronym. HVO, hydrogenated vegetable oil. So it's kind of diesel, but made from plant oils. And do you store some of that on the site?
RD
Yeah. So we make sure we always have 24 hours of fuel storage here for our generators, and then we have a local supplier that can top that up if we ever have an outage.
ML
Do you need to run them when they're not needed? Each year, are there a certain number of hours or days that you need to run them?
RD
Yeah, exactly. But it's a few hours a year, not too much.
ML
And that's enough to back up the whole plant? So you will ultimately have 1.2 gigawatts of those generators?
RD
Hopefully not, hopefully not. Increasingly, some of the customers that are running AI training are more willing to run a smaller number of their GPUs with generator backup. So they try to split the load into critical and not-as-critical, and we make sure that we have generators to back up the really critical tasks, and leave some of the non-critical tasks, which could stop and start again, without generator backup.
ML
You've got 24 hours of on-site HVO storage. 24 hours is not that long. I mean, you could do that if you were using batteries. Certainly if it was four hours, you wouldn't bother with all of that. You would just do batteries. But by the time battery costs come down a chunk more, 24 hours you could do with batteries. I suppose that's the worst case scenario where you can't buy or get any more HVO delivered.
RD
So the 24 hours is there to make sure that even if there was a hiccup in the supply chain, that you could find a second supplier to bring in the fuel, and even if we can't use HVO, we could easily just put in diesel. So that allows for the supply chain to keep filling up those tanks. And we're planning for a worst-case disaster here where you've got a few days where the grid's out. Now, it's very unlikely to happen, but you need to make sure that the customers are supported in that scenario. It's no secret there was a power outage here a few months ago in Iberia,
ML
28th of April.
RD
Exactly, I was landing on a flight from Australia and wondered what the hell was going on. I landed to a complete blackout in Lisbon, and couldn't work out…
ML
It has to be noted, by the way, that not just Portugal but the entire peninsula, Portugal and Spain, got their grids back up from a black start in 12 hours.
RD
Which is pretty impressive with the amount of infrastructure that was taken down.
ML
Incredibly good. And you have to believe that now they've done that dry run, not dry run, actual run, that next time, it would be even quicker. I mean, that is enormously impressive. I think it's important to note that.
RD
And I think they've learned a lot from that in the sense of decoupling the two grids as well. But what I can say from a data center level was it really gave us a great test of our facility. So we'd already tested everything to make sure it worked. The UPSs were doing their thing. The generators kicked in, the supply chain kicked in as it should, and our operations team were absolutely flawless. They didn't miss a beat.
ML
And my suspicion is that if you have 24 hours on site for the whole site, what you'd actually do is have many more hours for the bits that really matter. And you would differentiate, not necessarily try to keep everything going, in that sort of utterly disaster scenario.
RD
If there was an issue with getting the supply. But I think being on HVO actually worked to our advantage. So while the hospitals were scrabbling around for diesel, we had HVO. We were one of the only people in Portugal running on HVO. So we had trucks full of HVO coming our way whenever we needed them.
ML
Cleaning Up is supported by its Leadership Circle. The members are Actis, Alcazar Energy, Arup, Cygnum Capital, Davidson Kempner, EcoPragma Capital, EDP of Portugal, Eurelectric, the Gilardini Foundation, KKR, National Grid, Octopus Energy, Quadrature Climate Foundation, SDCL and Wärtsilä. For more information on the Leadership Circle, please visit cleaningup.live. If you've enjoyed this episode, please hit like, leave a comment, and also recommend it to friends, family, colleagues and absolutely everyone. To browse the archive of over 200 past episodes, and also to subscribe to our free newsletter, visit cleaningup.live.
ML
Okay, so we've been through the building, the cooling, the electrical system, the backup, but let's talk about the power supply. So there was talk in this project about various plots of land where you could be building solar. But I think now, if I'm right, the plan is to let other people do that, and then purchase from them, and also via the grid, a load of other sources of electricity. So talk me through what the power mix will be.
RD
Okay, so the way that we've started is to get access to the grid. We got our connections to the grid. We started by buying power from the grid while we were waiting to secure the customers and build up the load, and hence the eventual demand on the grid. So we're now both purchasing power from the grid and putting PPAs in place — power purchase agreements — for this first facility, and we'll increase those PPAs as we grow the load in the facility.
ML
And so PPAs, that is where you go out to somebody else whose business is developing wind, developing solar, and then you sign a contract that says, ‘we'll buy that.’
RD
Exactly. We tell them we're going to buy 2MW or 4MW over this period. And that gives them the investment they need to complete their project. So we're supporting the growth of those solar, wind and hopefully hydro projects through Portugal. We'll continue to do that as we grow out, but eventually we'd also like to supplement that with some direct wire projects. Whether we do them ourselves or we do it in partnership with a local developer, that will depend on the economics, and it'll depend on the complexity of the build. We're a data center builder, but we want to make sure that we're supplementing the grid.
ML
So you have still got the option on that land, so you could still do the private wire projects?
RD
We have a number of options in the region, and we're talking to a number of people that are already planning and have permits for wind and solar projects.
ML
So what percentage, in that 2030, 1.2 gigawatt data center all humming away, what percentage of the electricity at that point would be clean, would be renewables?
RD
So it's already 100% renewable. But there are various different ways that you can cut that, right? We need to make sure we've got 100% renewables, and we also want to make sure that we can do that in a demonstrable way. We also want to make sure that we're supplementing the grid and not just taking the renewable power off the grid. So our ambition is to make sure that, for every gigawatt of load that we're using, we've supplemented that in some way through direct wire agreements or PPA agreements.
ML
So additional new-build renewables. Okay, now you need a big grid connection. And that's one of the unique things about this site: it used to be a massive coal-fired power station. And the extraordinary coincidence is that yesterday, just as we were arriving here, they knocked down the chimneys of the power station. So have you inherited a whole load of the infrastructure? Because that was definitely part of the plan, wasn't it?
RD
We have. And that was one of the unique things about this site, and what made it so appealing in the first place, was there was already that infrastructure in place. It already had the transmission cabling coming from the grid. It already had an amazing cooling system. It was a three-gigawatt coal-fired power plant that was completely cooled by the sea water. So we've been able to repurpose that existing cooling infrastructure, and we'll be using that not only in this building, but in the future buildings.
ML
And it's not just physical infrastructure, it's the permits to be able to extract water and put it back into the sea and all those sorts of things, right?
RD
Exactly. And we're going to be doing it well within the limits that they were allowed to do it 40 to 50 years ago. We're making sure we're doing it in a much more sustainable way.
ML
That brings us to ‘what makes a good data center?’ If you look at the press, it is full of people who are building data centers. It's the front page of everything. And they're not going to be equally good in terms of whatever criteria your customers are using. And they're not going to be equally future-proofed. So first of all, the customers that are going to be using this facility, putting their chips in it: what are they looking for? Because there's a whole bunch of different things going on right now in terms of costs and speed and so on. What are they looking for? How are they weighting the different components?
RD
So I think the first baseline is, are you designing… Do you have the ability to deliver a resilient data center? And you need to make sure you have the team to support that, the experience to support that, hopefully, some sort of product that you can show them that you're actually able to do that.
ML
Your project has to be real, which is not a trivial sort of criterion.
RD
These days, it is important to say you have more than a concept, so that's the baseline. If you can pass that test, then the most important thing they're looking for is low total cost of ownership. So low capex that we translate into rent for them, but I'll explain the business model around that later. Low operational cost. So can we maintain the facility in an efficient way? Will they get low power costs through the grid? Do we have low cooling costs? And then time to market? Time to market is so important in this AI race at the moment. If you can deliver a data center within 18 months, then you're in a much better position than someone that has only just started their design.
ML
Presumably, some thought about the future and future proofing? And will it work for the next generation of chips and those sorts of things? But if they had 100 points to allocate between… Let's say the baseline is that it has to be a real project, but then 100 points to allocate between speed, cost and future proofing. Right now, where are your customers? How many points are they putting on each?
RD
I think speed is at least half of that. I think cost is probably equally weighted with future proofing. They are very focused on making sure they can deliver and support the demand that they're seeing from different AI workloads. So they want to make sure they can deliver those GPUs that they have allocated for the next 18 months. And they can get them up and running, and they don't have them sitting on a shelf somewhere. Because, as I said, it's billions of euros in that IT equipment alone.
ML
And it's becoming obsolete because Nvidia is going to launch its next generation of chips…
RD
Exactly, they need to get them to work and monetize them.
ML
So 50% speed, 25% cost, 25% future proofing, which is going to be essentially around future costs and so on. And the costs are the power cost and the cooling cost, or the total budget for power, which includes all of the cooling stuff. So sea water versus air cooling versus what others are doing. Is that what makes this such a special location and plot for a data center?
RD
It does because it's going to reduce the power cost by 20 to 30%.
ML
Can you give us the PUEs? The acronym PUE: what's sea water versus air?
RD
So we anticipate the sea water will give us a power usage effectiveness, the PUE of around 1.1. And traditional data centers are running at a PUE of around 1.3. And that 1.3 is the ratio of the total power usage for the IT power and the cooling, over just the IT power.
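A quick arithmetic sketch, in Python, of what that difference means, using the figures quoted above and assuming, for simplicity, that all non-IT overhead is cooling.

```python
# Comparing a PUE of ~1.3 (chillers and cooling towers) with ~1.1 (seawater cooling).

it_load_mw = 1.0
for pue in (1.3, 1.1):
    total_mw = it_load_mw * pue
    overhead_kw = (total_mw - it_load_mw) * 1_000
    print(f"PUE {pue}: {total_mw:.1f} MW drawn per MW of IT, {overhead_kw:.0f} kW of overhead")

# Overhead falls from ~300 kW to ~100 kW per MW of IT (roughly a factor of three),
# and total facility draw falls from 1.3 to 1.1 MW per MW of IT.
```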
ML
So turning the mathematics around, that means that by going from air cooling to sea water, you're essentially reducing the cost, well, the power budget, by about 20%.
RD
Exactly.
ML
And so over the life of a facility like this, that is pretty interesting.
RD
It's a big factor, if you consider that they're spending as much on renting the data center as they are on power. Sometimes more in the more expensive power markets. So the power bill could be as much as the rent. And it's a big factor.
ML
What about the future proofing piece? Because how can I put this… There are some players in this industry who don't seem to care nearly as much about what happens in two years time or four years time or six years time? They are so focused, perhaps more than 50% of their weighting seems to be on speed.
RD
Yeah, and often you're seeing that with the end-users themselves. Meta as an example, they are throwing up tents to put their GPUs to work. And that's great, and that's a great way of getting the building up and running within a few months. Our business model is a little bit different. We're building for the investors for the long term. So we're here to support the next 5 to 10 years of AI deployments, but also in 10, 15, 20 years, this building also needs to be ready to support cloud and other IT services. So we need to build in that flexibility.
ML
But it's a fascinating moment in economic history, because you have not just a whole bunch of companies, you have trillions of dollars going into facilities like this that have got lives potentially of 20, 30, 40 years, but an unbelievable focus on what happens in the next year or two. Let's move on to the business model of your customers and your contracts with your customers. Are they coming to you and saying, ‘Rob, I just want something for a year or two because we're in this race’? Or are they signing 20-, 30-, 40-year, ‘we're in this for the long term’ type contracts?
RD
So that's a bit of a live discussion. I think the industry is going through a bit of a change at the moment. Historically, if someone takes a data hall out of a larger data center like the one we're in today, they might sign a five to seven year lease agreement.
ML
That would be a sort of classic FLAPD, more the real estate model based in London or Frankfurt, five year lease?
RD
Yeah, but anywhere in this model where the developer is leasing the space to a hyperscaler or to another customer, they would sign a five to seven year lease for use of that space. And then you would expect, and they generally do, that they would renew that lease and keep going. When you're talking about building a whole new data center out of the ground, our next one, which will cost €1.5 to €2 billion, you need to make sure you have a very strong anchor tenant. So we're not going to be able to raise €1.5 billion of debt without knowing that we're going to get that paid back over a period. So we need to make sure that we have a good lease in place with a very creditworthy tenant. And that's the discussion the industry is having at the moment.
ML
You need a strong tenant with a strong balance sheet, but you also need more than five years of guaranteed income, right? The lease has to go beyond five years to build a facility like this.
RD
Exactly. So when you're building something out of the ground and you're doing that for one customer, or with that one customer as an initial anchor tenant, then you would expect that that lease to be 10 plus years.
ML
Okay, so the customer environment, and we all talk about the hyperscalers, right? So you've got the Amazons and the Googles and the Metas, and OpenAI now, and Oracle and Apple. I'm not sure I've got them all, but there's those. Then there is a sort of second tier, which I think is called the neo-cloud. There's a bunch of people like CoreWeave, and a whole load of, I don't want to say it, but yeah, CoreWeave clones, that are springing up. You also have investors, Davidson Kempner, but also all the private equity infra funds… Whose balance sheet actually underwrites the building of these data centers?
RD
It's a combination of things. There's lots of different deals being done where either there's a backstop of an end customer with a higher investment grade, or there's equity being put in at the neo-cloud level that's supporting some of the initial investment, or there's a three party agreement being put in place. There's multiple different things being experimented with, if I put it that way, in the industry at the moment.
ML
But it's a little bit of a Spider-Man meme, where everybody sort of says, ‘use their balance sheet.’ Because, yes, of course, CoreWeave has got some equity, but they don't want to build with equity. You don't want to build with equity. The hyperscalers could build with equity, but they can't do trillions and trillions of building just using their equity value. Can we build the number of data centers that people are planning?
RD
I think we'll build as many data centers as are needed to run the load and are actually being monetized, right? But I think the hyperscalers you talk about can build a few data centers, a few gigawatt data centers, a year, but they also need to supplement their growth with the support of people like us and people like the neo-clouds, that can go and anticipate that demand, can go and pre-purchase GPUs, can go and get joint ventures in place, or start building shells, so that when that demand comes to the hyperscaler or to OpenAI, there is somewhere to put that demand.
ML
And of course, there's another player in this, which is, of course, the sovereign wealth funds. And there are balance sheets out there. It just feels like a really fascinating question of who's actually, in the end, standing behind this for long enough for a facility like this to be built. I want to move on to another topic, which, as I say, when I wrote that Green Giants report, we spent a lot of time thinking about, which is: how do you play nicely with the grid and the local community? Do you know whether you're regarded as a kind of net benefit or a net problem? Because we hear in the news that some data centers are very much regarded as bad actors. And are you a bad actor? Are you seen as a bad actor here in Sines?
RD
I don't think so. If you look around, we're in a very industrial zone. There was a big effort made decades ago to revitalize this area with a number of industries. The coal-fired power plant was one, but there's also a lot of LNG and refinery activity going on around here. We're now entering a new phase of development in this area: with the coal-fired power plant being decommissioned, we're coming in, and there are a number of other industries, such as a battery manufacturing plant, being put up nearby. These are all seen as greener industries. I think people are excited about what we're bringing. We're bringing revitalization, hopefully, of the local community by helping build additional housing. We're trying our best to use as many local people as we can to support not only the construction of this data center, but also the operation of it after it's built. And we're working with the local schools and then, more widely, universities, to try to upskill people to support this, to take them from an LNG plant or a coal-fired power plant into operating a data center.
ML
But it's a bit of a double-edged sword, because you can say ‘we're creating all these jobs, they should love us,’ but you're also using all the electricians in the region, right?
RD
We are. But there also needs to be people brought into the region, and those people will hopefully want to stay. And again, that brings another form of revitalization.
ML
Are you finding the people on the site, which are not just yours but lots of contractors, are there lots of Portuguese who have been expats? They've been working in London, or they've been working in the US, or they’ve been… Is that sort of the diaspora being summoned back to Portugal? Because finally you've got a huge project that's worth working on here?
RD
Exactly, that's been a huge benefit for us. I mean, people learned about our project, not only through your white paper, but since some of the early announcements. And they were off building data centers for hyperscalers in Finland and South America and Asia. And they heard about the project, they thought, ‘I'm ready to come back to Portugal now there's a project here for me.’ And that's been amazing. We've got so many Portuguese natives that have come back for that.
ML
That must be quite gratifying, because you're almost like reunifying families. And there must be some people quite grateful that there is this reason to come back to Europe.
RD
I think they are. And then the expats as well, are pretty grateful. I'm pretty grateful to be able to build a data center in such an amazing country.
ML
How many people are actually going to be on site? What was the peak when this was being built? Or is it at its peak today? And what will it be? Are we talking hundreds? Are we talking thousands?
RD
So when this first building was going up, we were peaking at about 600 people. For the next one, we're looking at about 1,500 to 2,000 people. And if we're building more than one data center at a time, that number could go up to 3,000 or 4,000.
ML
And these people, some of them, will be essentially moving here. I mean, it's eight years. Are you moving here? Have you already moved? Is this where you live?
RD
After I did my due diligence on the company and read your white paper and decided to quit my job at Digital Realty, I was all in, I decided to move to Portugal straight away.
ML
So that is at peak, you're going to be a few thousand people with their families. I mean, yeah, you better build some housing, right? Otherwise you will be unpopular.
RD
Exactly. And there's a number of private and public initiatives to make sure that happens quickly. I mean, I think you saw a little bit in Sines last night, there's a lot of cranes up and a lot of apartment buildings going up, and hopefully that continues.
ML
How much time do you spend with the local municipality, the mayor, the councilors?
RD
It's an important part of not only the permitting process, but also just the wider community process. So we're spending as much time with them as we can.
ML
And then coming to the grid, you said you want it to be additional renewables. It all sounds great, right? It's marvelous. You'll be building new renewables and so on. But how do you make sure that you're not a net pain in the backside for the grid? Because obviously, you used to have a coal-fired power station, fine, yeah, but it used to pump out electricity to power Portugal. And now you're going to be at best neutral, and maybe occasionally requiring huge amounts of, I don't know, hydro. So everybody wants hydro, right? So you can make it sound very nice and ‘oh, we're nice and green.’ But at some point you are contracting the hydro to keep these servers going, and that will not be available to keep people charging their EVs or running their businesses or their heating or whatever.
RD
Oh, exactly, I think the hydro point is important, and we’re not rejecting the idea of putting in large battery storage at some point if we need to. But I think what the TSO (transmission system operator) did very well, a couple of years ago, was start looking at all the off-takers in the local region to see which projects were really going ahead and which projects were paper projects, just to see how much transmission they would have to put in. And then obviously talking with the power generators as well, to make sure there were as many solar and wind projects coming online as would be needed to support that load growth. So we've been really transparent with them about when we think we're going to be using the power. Yes, we've committed to making sure that we sign up to PPAs so those projects do go ahead on time, but we need to do it in a sensible way as we grow our load.
ML
Now, you've also got 1.2 gigawatts, not yet, but you will have 1.2 gigawatts of backup generators that we've established will be running a few hours a year. And will those also be available to support the local grid? Because there'll be times when — if there's no wind, no sun, whatever — you might need them. But so might everybody, locally. I don't want to kind of over-trade the idea, but you know, the hospital and the police cars need to be charged and the ambulance. Is there a scenario where you just say, ‘Look, we're actually going to make that available to the grid?’
RD
Well, these are other things that we're looking at hand in hand. I think first and foremost, we would like not to have to install all the generators we've planned for, but we do hope that we'll work more closely with our customers to only put in generators supporting the absolutely critical load. Not 100% of their load is always the critical load that needs to be running in every scenario. So we'll work with them to try to phase out some of the generators, and if we have to install the generators and then they're not all needed for critical load, then we could look at potentially offering them up in a critical scenario.
ML
I want to get on to the final sort of question, which is to step back from all this. It's an extraordinary development in its scale, in its ambition, and I'm trying slightly to find the weak points. And, you know, it's very impressive. You yourself got into this industry 20 years ago, and you have an engineering background, right? It was, not quite a sleepy backwater, but a very kind of crunchy industry — develop these projects, build the data centers. And you saw it through from the early years of the cloud and so on, and suddenly you're in the absolute… not even the eye of the storm, it's the middle of the storm. How does that feel? Is it weird?
RD
It's very weird. For me, it never felt like a backwater, because the projects are so intense and they move so fast that it always felt like an intense industry to me.
ML
So five megawatts of data center back in 2003 or 2005 felt pretty extraordinary.
RD
The first one I did was the biggest one in Europe at the time. It was for a banking institution, and it was actually designed to go up to 24 megawatts. But that made it the biggest one in Europe at the time. And it was massive.
ML
Where was that?
RD
That was up in the north of England. And I moved to Leeds for a couple of years for that project. But I was so excited about that.
ML
The great financial center of the UK, which is Leeds. There's an enormous amount there. I'm not joking, that's an important center.
RD
Yeah, exactly. But I was fascinated by the speed of the project and how much capital needed to be poured in over a relatively short timeframe. So it was put up in 18 months, and that was almost £300 million at the time. And it was fascinating. So it didn't feel like a backwater. It felt very intense to me, and it was very demanding. And it's been like that ever since. I think the demand for data centers is going up and up and up. That's undeniable. Yes, there's a lot of hype around AI at the moment, and we'll see where that goes, but I still think the growth in demand for data centers isn't going to go anywhere.
ML
Yesterday, not only were the chimneys of the coal fired power station coming down, but Rick Perry, former Secretary of Energy in the US, has got a company called Fermi. And Fermi was IPO’d as a data center developer. I'm not sure that they've got more than what you've got here. Because you started this four years ago, and have actually got a data center. We've walked through it, it exists. It is phase one of 1.2 gigawatts. And he IPO’d, he sold 4% of the business for $600 million on the NASDAQ and the London Stock Exchange, very nice to see London playing there, but a $15 billion valuation for a set of data centers, which are kind of less real than what we're seeing here today. Do you look at that and go, ‘yes, I knew it would always be like this. This is the new dawn.’ Or do you look at that and go, ‘holy moly, we're in uncharted and possibly quite dangerous territory here.’
RD
Yeah, as I said, I look at that and think: okay, there are some projects that are overpromising, and I think that the timescale on that project just seems incredibly rapid. And yes, I'm sure it'll go ahead, and I'm sure it'll be successful.
ML
He's talking about 1 gigawatt by 2026.
RD
Incredibly quick.
ML
Forget the eight years.
RD
Incredibly quick. But I think if we continue to build sustainable, future-facing data centers with flexibility, we'll be successful. And there's a lot of demand that's coming to Europe. So I don't even think we've seen the big wave in Europe yet. I think it's coming.
ML
How quickly did Elon build his gigawatt data center? That was very, very quick.
RD
So it can be done, that was in less than a year. That can be done.
ML
If you build that fast, can it be future proofed the way this place is future proofed?
RD
I think you're gonna have a lot of problems, because once you have an operational data center, it's very hard to fix any mistakes you might have made along the way. If you're looking at a future-proofed data center that wants to run for 20 years, you need to make sure that every cable is terminated perfectly, everything is tested before you switch it on. Because it's very hard to do anything once things are running.
ML
So to a certain extent, are you sort of doing things the right way, slightly, I would say, the traditional way, and just watching these people with a bit of bemusement?
RD
I think there's space for both of us in the industry, but I think our goal is to be here for many, many years to come.
ML
And one element of future proofing is if AI ends up, I don't want to say not real, because it's clearly real and it's clearly going to be transformational, but if it ends up just not needing as many data centers, maybe it's more efficient than anybody thought, or there's some model structures that continue to evolve, DeepSeek-type innovation, what happens to this place in that scenario?
RD
Then this place will end up running as a combination. It'll be running AI training, AI inference, and other cloud products. As I said, there'll still be a need for these large-scale data centers. That thesis that you wrote four years ago is still completely true. These data centers are needed.
ML
Although it's incredibly expensive, it will be a low cost provider. Is that the idea?
RD
Well, this is not a very expensive data center. The main expense, as we said before, is the GPUs they're bringing in.
ML
Rob, it’s been a great pleasure, absolutely fascinating to visit. And thank you very, very much for spending time with us and for allowing us to wander around. Not everywhere, because we didn't get into the client rooms, but it was really interesting.
RD
Thank you so much for hosting me, and it's been a pleasure to host you here.
ML
Very good. Thank you.
RD
Thanks, Michael.
ML
So that was Rob Dunn, CEO of Start Campus, a portfolio company of Davidson Kempner, who are, of course, a member of our Leadership Circle. Since recording this interview, news broke that Microsoft will be making a $10 billion investment into the Start Campus Sines data centre. We’ll put a link to the press release about that in the show notes, alongside links to the white paper on very large out-of-town data centres that I co-authored and edited, entitled Green Giants; the Cleaning Up audioblog I wrote on Generative AI, read by my alter ego Mike Headroom, called The Power and the Glory; and a link to the Start Campus website. With that, it remains for me to thank our producer, cameraman and drone operator, Oscar Boyd; video editor Jamie Oliver; the team behind Cleaning Up; the Leadership Circle, who make all of this possible; and you, the audience, for spending some time here with us today. Please join us this time next week for another episode of Cleaning Up.
ML
Cleaning Up is supported by its Leadership Circle. The members are Actis, Alcazar Energy, Arup, Cygnum Capital, Davidson Kempner, EcoPragma Capital, EDP of Portugal, Eurelectric, the Gilardini Foundation, KKR, National Grid, Octopus Energy, Quadrature Climate Foundation, SDCL and Wärtsilä. For more information on the Leadership Circle, please visit cleaningup.live. If you've enjoyed this episode, please hit like, leave a comment, and also recommend it to friends, family, colleagues and absolutely everyone. To browse the archive of over 200 past episodes, and also to subscribe to our free newsletter, visit cleaningup.live.