SCADA probably isn’t telling you how well your solar site is doing.
By: Adam Baker
As long as your car starts in the morning, you probably think to yourself, "Everything must be ok!" But in reality, the fact that your car started doesn't say much about whether the car is running well.
Even if you write down mileage on every receipt to figure out your MPG from one tank of gas to the next, you can't take into account how you used the car over that tank. Highway miles are different from city miles. Without the context of how the vehicle is used, MPG doesn't tell you very much.
SCADA doesn’t always show you what you need to know
Early in my career, I was given a challenge. On a 290MW solar site under construction, the first month’s energy assessment said the site was running about 1.5% below the pro forma.
With only 10% of the site complete at the time, the engineering department was worried this 1.5% problem was going to scale with the site (eventually costing the company thousands of dollars per day in underperformance).
I was asked to root cause the problem. SCADA data told me the inverters were running as they should, but the output from some was less than what we expected. Basically, SCADA didn’t tell me much about what was going on.
Long story short, after a week of different tests, we found a slight variation in inverters. 19.05 milliamps might be required on one, and 19.2 milliamps might be required on the next just to get to full output. The documentation said 19 milliamps, but in practice, inverter behavior was different than expected. To fix this problem, we had to bias up the output for all inverters at the site to get them to full output.
That’s an example of an obscure hard-to-find problem that would have turned into an expensive problem if we couldn’t find the issue. The data provided in our SCADA system, though rather comprehensive, did not point us to where the root cause might be.
There are thousands of places in your solar plant where something could be working less than 100%. It’s buried somewhere in the data, and you’ve got to have context to find it.
Should you trust your SCADA system?
Years ago, SCADA was thought to be the cure-all to keep the plant running well. But a SCADA system is only as good as the way the configurator interprets the data coming into it. Just providing numbers as they exist on the site without proper context won’t give you a lot of value.
For example, the capacity test verifies a site is capable of making the power it's rated for. If you have a 5MW site, the test shows the site can run at 5MW for a period of time. That test tells you your inverters are reaching maximum output, but doesn't necessarily tell you much about whether the site is delivering all the energy it possibly could.
Back in 2009, pretty much all sites were large rectangular arrays on very rectangular flat patches of dirt. It was great! Each inverter had 8 combiner boxes, and each combiner box had 12 inputs, and each input was made up of 8 harnesses. Engineers knew we had the same amount of DC coming into each combiner box and feeding into each inverter. It was easy to see underperformance, because each should give you the same amount of current.
These days, I see fewer and fewer rectangular sites. Now our sites have discontinuous ground (topo) and different numbers of strings going into combiner boxes. This makes extracting the correct information from the SCADA system more difficult. Smaller, nuanced problems can be hard to find, especially when they're buried in trends of data over time and you can't see comparative data from one day to the next.
So, should you trust your SCADA system? The SCADA system will give you the data it's collecting from the field, but the value of that data is dependent on the knowledge of the SCADA implementer. It's up to them to know how to interpret the data and give you meaningful information.
How to decide if your plant is functioning well
Three general areas are most impactful in determining whether a site is running well or not.
1. Finding opportunities to normalize data.
It's difficult to know just by looking at a graph of raw combiner box currents whether the site is doing well or not. That judgment relies on the operator's (fleeting) knowledge of how it *should* be running. Normalizing the data by taking the current from each combiner box and dividing it by the number of strings gives you values you can actually compare.
Instead of looking at the raw values, we take the total current from each combiner box and divide it by the number of strings feeding it. Now you can see combiner boxes 1 and 4 are a little lower than the ones around them. That's indicative of a DC health issue. You don't have to memorize how many strings go into each combiner box. All you're doing is looking for the differences to find underperformance, and you can get to that conclusion very quickly.
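The normalization step described above can be sketched in a few lines. The current readings and string counts below are hypothetical, not from a real site; the 5% threshold is likewise an illustrative assumption.

```python
# Divide each combiner box's measured DC current by its string count so
# boxes with different string counts become directly comparable.

# Raw current readings (amps) and string counts per combiner box --
# illustrative numbers only.
combiner_current = {"CB1": 88.0, "CB2": 120.0, "CB3": 118.5, "CB4": 90.5}
strings_per_box = {"CB1": 10, "CB2": 12, "CB3": 12, "CB4": 10}

# Normalized value: amps per string.
per_string = {cb: combiner_current[cb] / strings_per_box[cb]
              for cb in combiner_current}

# Flag boxes running noticeably (>5%) below the site median.
median = sorted(per_string.values())[len(per_string) // 2]
underperformers = [cb for cb, amps in per_string.items()
                   if amps < 0.95 * median]

print(per_string)
print(underperformers)  # CB1 and CB4 stand out, as in the example above
```

The point of the sketch is that the comparison works on normalized values alone; you never need to eyeball the raw currents or remember which box has how many strings.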
2. Performance analysis.
Performance analysis is a challenging area. There's an easy temptation to ask how the site is performing today compared to yesterday. Trended data might look the same over time, but the aggregation of individual points within that trend is a more meaningful comparison.
To say that on March 1, 2018, the inverter energy was X, and on March 1, 2017 the inverter energy was Y, that might not be an accurate comparison if the irradiance or temperature were different. You might want to try and find a day within a week ahead or behind a particular date to find similar temperature and irradiance conditions to make a more valuable comparison.
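One way to sketch that "find a comparable day" idea: rank candidate days by how closely their weather matches, rather than by calendar date. The field names, numbers, and the weighting between irradiance and temperature below are all illustrative assumptions, not a prescribed method.

```python
# Pick the most comparable historical day by weather, not by calendar date.
import math

# (date, plane-of-array insolation kWh/m^2, avg ambient temp C) -- made-up values
history = [
    ("2017-02-25", 5.1, 18.0),
    ("2017-03-01", 4.2, 14.5),
    ("2017-03-06", 5.6, 21.0),
]
target = ("2018-03-01", 5.5, 20.5)  # the day we want to evaluate

def weather_distance(day, ref, temp_weight=0.05):
    """Smaller is more similar; temperature is down-weighted vs irradiance."""
    _, irr, temp = day
    _, irr_ref, temp_ref = ref
    return math.hypot(irr - irr_ref, temp_weight * (temp - temp_ref))

best = min(history, key=lambda d: weather_distance(d, target))
print(best[0])  # "2017-03-06": closest irradiance/temperature match
```

Here March 6 of the prior year is a better baseline than March 1, even though the date is five days off, because its irradiance and temperature are much closer to the day under evaluation.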
3. DC health.
Finding individual problems in DC health is one of the most challenging O&M tasks. I’m sure you’ve seen an infrared camera on a drone finding hot spots and helping to diagnose DC health issues. It’s interesting tech that gives you some focus on where to look at a site, but I don’t like that it doesn’t give you quantifiable values of how the site is doing. All it’s saying is that there are hot spots here, or these modules are running warmer than others. Then you need to go do a visual investigation.
Our approach at Affinity Energy has been to go out and do freelance data collection. We go out to a site with some DC CTs, clamp them on individual strings and harnesses going into combiner boxes, and spot monitor to provide quantifiable values that tell us whether or not strings are underperforming.
If we find a combiner box issue after normalizing data, finding the specific string contributing to that problem can be expedited after clamping on some CTs, and looking at the data for a day or two. It’s usually pretty easy to find the specific string or harness that’s underperforming, when compared to the rest. Our Solar String Analysis solution makes it so you’re only looking at a couple dozen modules… instead of a couple hundred.
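The string-level comparison works the same way as the combiner-level one: once CTs are clamped on, compare each string's average current against its peers in the same box. The readings and the 15% threshold below are hypothetical, assumed for illustration.

```python
# After clamping CTs on each string in a flagged combiner box, compare each
# string's average daytime current against the box mean to isolate the
# underperformer. Data is hypothetical.
string_current = {  # average daytime amps per string, illustrative
    "S1": 8.9, "S2": 9.1, "S3": 9.0, "S4": 6.2,  # S4 drags the box down
    "S5": 9.0, "S6": 8.8, "S7": 9.1, "S8": 9.2,
}

mean = sum(string_current.values()) / len(string_current)

# Flag strings more than 15% below the box mean.
suspects = sorted(s for s, amps in string_current.items()
                  if amps < 0.85 * mean)
print(suspects)  # ['S4']
```

A day or two of this kind of spot data is usually enough to single out the weak string or harness, which is what narrows the visual inspection from a couple hundred modules down to a couple dozen.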
In summary, we have some solutions that take real-time data, put wrappers around it to normalize it, understand what it’s telling you, and make it easy to find the differences from one area of the site to another. Once an area has been identified, it’s simple to do the due diligence to figure out exactly where that problem occurs.
Adam Baker is Senior Sales Executive at Affinity Energy with responsibility for providing subject matter expertise in utility-scale solar plant controls, instrumentation, and data acquisition. With 23 years of experience in automation and control, Adam’s previous companies include Rockwell Automation (Allen-Bradley), First Solar, DEPCOM Power, and GE Fanuc Automation.
Adam was instrumental in the development and deployment of three of the largest PV solar power plants in the United States, including 550 MW Topaz Solar in California, 290 MW Agua Caliente Solar in Arizona, and 550 MW Desert Sunlight in the Mojave Desert.
After a 6-year stint in controls design and architecture for the PV solar market, Adam joined Affinity Energy in 2016 and returned to sales leadership, where he has spent most of his career. Adam has a B.S. in Electrical Engineering from the University of Massachusetts, and has been active in environmental and good food movements for several years.