By: Adam Baker
15-Minute Averages Aren’t Giving Operators Enough Visibility into Solar Farm Operation
Having cut my professional teeth in automation for manufacturing and process industries, I can say there are processes that act slowly, and those that act at very high speeds. Data collection and control requirements vary widely between a can filling line at 2,000 cans per minute, and a transfer line in an auto assembly plant.
When I went to work for [A Very Large US-Based Solar Company], the data collection philosophy was much the same as I had seen in manufacturing. However, what I see in the wider PV industry is 15-minute data.
Here’s the problem.
If you only get a snapshot of what’s going on every 15 minutes, you may miss alarm triggers entirely. If a tracked value swings widely within that 15-minute window, you won’t see it, and that may put you at risk.
In applications where voltage control is a requirement, 50-millisecond command updates at each of the 500 inverters on site were our standard. That was a VAR command being updated 20 times per second in order to close a process loop. To be fair, updates that granular only make sense for certain high-speed control functions, and there are plenty of instances when collecting data once every 15 minutes is appropriate. But the point is, only collecting data every 15 minutes leaves you blind for the other 14 minutes.
You can still have your cake and eat it too: keep reporting 15-minute averages, but recognize that to produce a true 15-minute average, you have to collect data much more often than once every 15 minutes. And any data point that has to do with grid conditions should be updated within seconds.
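As a minimal sketch of the idea, the snippet below rolls per-second samples into 15-minute buckets, keeping the average for reporting alongside the min and max so a transient excursion isn’t lost. The function name, the `(timestamp, value)` sample format, and the voltage figures are illustrative assumptions, not anything from a specific SCADA product.

```python
# Sketch: summarize 1-second samples into 15-minute buckets, preserving
# min/max so short excursions survive the averaging. Names and values
# here are illustrative, not from any particular historian or site.

def summarize_15min(samples, window=900):
    """Group (timestamp_seconds, value) pairs into 15-minute buckets."""
    buckets = {}
    for ts, value in samples:
        buckets.setdefault(ts // window, []).append(value)
    return {
        bucket * window: {
            "avg": sum(vals) / len(vals),  # what the 15-minute report shows
            "min": min(vals),
            "max": max(vals),              # where a one-second spike shows up
        }
        for bucket, vals in buckets.items()
    }

# A one-second overvoltage spike at t=400 in an otherwise flat window:
samples = [(t, 34.5) for t in range(900)]
samples[400] = (400, 36.2)  # a transient a single 15-minute snapshot would miss

summary = summarize_15min(samples)
print(summary[0]["max"])  # 36.2 -- the spike survives in the max
```

The 15-minute average barely moves (one spiky second out of 900), which is exactly why an average alone, let alone a single snapshot, hides this kind of event.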
When Should I Update Data Every Few Seconds?
There are three big data points you should be updating at least every 5 seconds, and they all have to do with grid conditions.
- Voltage and current at the interconnect: While voltage and current should remain stable, they will change on a second-to-second basis. If voltage drifts toward a point at which plant operation could be tripped, an operator will want to know ASAP, rather than realizing 14 minutes after the fact that the plant tripped offline.
- Wind speed: If the site uses trackers, the tracker system will independently monitor wind speed and move to a stow position if the wind picks up. But unusual wind events have caused serious damage at solar sites across the world. Monitoring wind speed may not seem that important, but trust me: when a microburst blows 20 mph faster than your module mounting structure is rated for, you’ll wish you had second-by-second wind data so you can prove the damage was an act of nature, not poor plant design. Even if you don’t have adjustable racking to protect your modules during an intense wind event, detailed wind data matters for insurance purposes. If you can run a report showing wind speed on a 1-second basis, you can prove the site was designed per industry guidance and that what occurred was an unusual event.
- Power factor and frequency: Any inverter or transformer alarm (really, any device alarm) should also be updated every few seconds. That way, you can capture transient alarms that come and go. If you use a historian to collect this high-speed data, you can also set it up to provide 15-minute averages for reporting purposes.
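To make the transient-alarm point concrete, here is a sketch of an excursion scan over per-second readings: it records the start and end of every threshold violation, so an alarm that asserts and clears inside a single 15-minute window still leaves a trace. The function name and the 60.5 Hz limit are assumptions for illustration, not a recommended setpoint.

```python
# Sketch: find threshold excursions in a per-second series, returning
# (start_index, end_index) for each event. The limit value below is
# illustrative only, not a real protection setpoint.

def find_excursions(readings, high_limit):
    """Return (start, end) index pairs where readings exceed high_limit."""
    events, start = [], None
    for i, value in enumerate(readings):
        if value > high_limit and start is None:
            start = i                       # alarm asserted
        elif value <= high_limit and start is not None:
            events.append((start, i - 1))   # alarm cleared
            start = None
    if start is not None:                   # still active at end of scan
        events.append((start, len(readings) - 1))
    return events

# A 3-second frequency excursion inside one 15-minute window:
freq = [60.0] * 900
freq[200:203] = [60.6, 60.7, 60.6]
print(find_excursions(freq, 60.5))  # [(200, 202)]
```

A once-per-15-minutes sample of this series would almost certainly read 60.0 Hz at both ends of the window and report nothing; the per-second scan captures when the event started, when it cleared, and how long it lasted.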
Are 15-minute data samples appropriate when measuring solar farm statuses?
15-minute data points do have their place…usually when measuring data points an operator can’t do anything about. For example, ambient air temperature: it changes slowly and there’s nothing you can do about it, so you don’t need to monitor it on a second-by-second basis. Irradiance is another data point that doesn’t change very often. Tracker movement is slow and infrequent, so 15-minute data for tracker position is reasonable; even if a tracker were to stop moving, the failure mode is less-than-optimal energy collection, but the plant keeps running.
Overall, you are better served by 15-minute averages rather than 15-minute samples for pretty much every data point collected at a solar farm. But determining what happened between a running state and a tripped state, and preventing it from happening again, is nearly impossible if the critical data isn’t granular enough to show the sequence of events that caused the problem. When problems occur in solar fields, they typically begin and end within the 15-minute window at which many sites’ data collection updates.
Adam Baker is Senior Sales Executive at Affinity Energy with responsibility for providing subject matter expertise in utility-scale solar plant controls, instrumentation, and data acquisition. With 23 years of experience in automation and control, Adam’s previous companies include Rockwell Automation (Allen-Bradley), First Solar, DEPCOM Power, and GE Fanuc Automation.
Adam was instrumental in the development and deployment of three of the largest PV solar power plants in the United States, including 550 MW Topaz Solar in California, 290 MW Agua Caliente Solar in Arizona, and 550 MW Desert Sunlight in the Mojave Desert.
After a 6-year stint in controls design and architecture for the PV solar market, Adam joined Affinity Energy in 2016 and returned to sales leadership, where he has spent most of his career. Adam has a B.S. in Electrical Engineering from the University of Massachusetts, and has been active in environmental and good food movements for several years.